Simon Winkler
Hi everyone,
The topic of an API has been floating around here for almost four years now.
By now it is about much more than just a programming interface; it is about being able to interact with projo at all in our world of, to put it in one keyword, AI.
For example, we are currently building local AI bots and workflows that could save us a lot of project, support, and information requests, and that would process the data from the API exactly as we need it for our daily work.
Unfortunately, there was no response to Stefan's question last year, so I would ask for a brief explanation of what “long-planned” means to you here, and what weighty reasons speak against it.
This is not just about a “backup export”, which is actually secondary, but about targeted data access to everything else in Projo.
Examples:
- Project data for an internal project database / as an acquisition aid
- Office-wide evaluations packed into individual reports, possibly pre-analyzed by a bot that points out outliers (a minimal sketch follows below)
- Personnel management - connection to certificate interfaces, etc.
- Automated (or semi-automated) analyses, etc.
...
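To illustrate the outlier idea from the list above: a minimal sketch of what such a bot-side check could look like. The data shape is entirely invented, since projo's API does not exist yet; a real implementation would pull these rows from the interface instead of hard-coding them.

```python
import statistics

# Invented example rows, standing in for evaluations a bot would pull
# from a (future) projo API.
projects = [
    {"name": "Project A", "hours_budget": 800, "hours_booked": 790},
    {"name": "Project B", "hours_budget": 500, "hours_booked": 980},
    {"name": "Project C", "hours_budget": 300, "hours_booked": 310},
]

# Utilization = booked hours relative to budgeted hours per project.
utilization = [p["hours_booked"] / p["hours_budget"] for p in projects]
mean = statistics.mean(utilization)
stdev = statistics.stdev(utilization)

# Point out projects that deviate strongly from the office-wide average;
# the threshold is arbitrary here and would be tuned in practice.
for project, u in zip(projects, utilization):
    if abs(u - mean) > stdev:
        print(f"Outlier: {project['name']} at {u:.0%} of its hours budget")
```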
I would appreciate brief feedback.
Arne Semmler
Simon Winkler: What do you use the PROJO data export for in this context? If you're not using it, why not? In any case, all the requirements you have formulated are, to a first approximation, read-only requirements towards projo, correct?
Simon Winkler
Arne Semmler
Hi Arne,
Thanks for the quick reply!
To be honest, I think Johannes Hanf has already written quite a bit about this. For me, a data export is very much a one-way street: if I build complex scripts on top of it and you make a small change to the database structure (e.g. rename a table), my scripts break.
An API, by contrast, offers a stable, well-documented interface that is reliable in both directions.
In addition, a dump offers no way to limit access: it exists completely or not at all. With an API, as far as I know, access can be restricted (scoping) so that a tool, for example, only reaches a very specific slice of the data. For GDPR/data-security reasons I consider this very important, regardless of whether the access is read-only or write.
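As an illustration of scoping, a minimal sketch of what a read-only, scope-limited request could look like. Everything here is an assumption (base URL, endpoint, token scope); projo has not published any API, so this is only the shape of the idea.

```python
import requests  # pip install requests

API_BASE = "https://api.projo.example/v1"  # hypothetical base URL
TOKEN = "token-with-scope-projects:read"   # hypothetical read-only scope

# A tool holding this token could read project data and nothing else;
# personnel or invoice endpoints would be rejected by the scope check.
resp = requests.get(
    f"{API_BASE}/projects",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

for project in resp.json():
    print(project["id"], project["name"])
```

The same kind of JSON response is exactly what standard REST connectors (e.g. in M365) can consume directly.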
We want automations that only work against a real-time API. This is essential even for read access, so that the data reflects the current state and not one that was written to a static file up to 24 hours ago.
Without an API, Projo remains completely isolated in the long term; I think the rest of our IT world (M365, Teams, AI, etc.) must be able to connect to it. M365, for example, ships standard connectors for REST/JSON, whereas with raw SQL data in a proprietary layout I have no chance of starting any process at all (especially since I would somehow have to store your secret download link in an automated way, which I also find critical; fetching it manually every day would be just as impractical).
Simon Winkler
Perhaps one addition: for many scenarios, an API request (which also generates server load) would probably not be necessary at all; some things could also be handled with webhooks. The API would then only be needed for write operations, which are relevant, for example, in an “early warning system”: when I need to put something into a user-defined field in our projo control center, e.g. a note or a status change to “careful” when a project threatens to exceed its budget after deployment planning, or the like.
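To make the webhook-plus-write idea concrete: a rough sketch in which every endpoint, payload field, and event is invented, assuming projo could one day push a webhook after deployment-planning changes and accept a write to a user-defined field.

```python
import requests                    # pip install requests
from flask import Flask, request   # pip install flask

app = Flask(__name__)
API_BASE = "https://api.projo.example/v1"  # hypothetical base URL
TOKEN = "token-with-scope-projects:write"  # hypothetical write scope

@app.post("/projo-webhook")
def on_planning_change():
    # Hypothetical payload pushed by projo after deployment planning changes.
    event = request.get_json()

    # Early warning: if planned costs now exceed the budget, write a
    # "careful" status into a user-defined field in the control center.
    if event["planned_costs"] > event["budget"]:
        requests.patch(
            f"{API_BASE}/projects/{event['project_id']}/custom-fields/status",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"value": "careful: budget at risk"},
            timeout=10,
        )
    return "", 204
```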
Stefan Antonitsch
Hello! Is there a current timeline for when the API connection will be implemented? Thanks, everyone!
Arne Semmler
The “SQL dump” item has now been moved to a separate Canny point: https://projo.canny.io/feature-requests/p/api-call-sql-dump
This post has been renamed to “API call” accordingly.
Arne Semmler
marked this post as
long-planned
Arne Semmler
marked this post as
planned
Benedikt Voigt
Our current vision is that we will offer both in the long term:
1) a daily backup (an SQL dump just for you as a company, not intended to be re-imported into projo)
2) an API interface
Ruben Hauser
Benedikt Voigt: An API would be extremely important and useful for the future.
Arne Semmler
marked this post as
under review
Johannes Hanf
Looking to the future, I would also prefer an API interface. The days of backing up and copying SQL databases are really over, or at least no longer simple, once you use Azure services in the cloud instead of physical servers in the office. Even with a database export, a lot of documentation is required to map the data meaningfully. With an interface, this can be built up successively and extended area by area.
So from my side, clearly option 2).
Arne Semmler
Johannes Hanf: Hi Johannes, your objection is entirely valid, and we share it. However, we can actually provide a database dump faster than an API. And I assume the dump can also be a useful option for purposes other than connecting BI services. We can also store the dump in suitably protected cloud storage, so that it is not a manual download but a snapshot stored at a location yet to be defined.
We create the documentation of these structures for our own developers anyway, so it can also be made available to external developers. It is clear to us that the dump is a first step and will be followed by an API. Either way, both are functions intended for developers rather than normal users, so the documentation will not be end-user documentation.
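If the snapshot really lands in protected cloud storage, the pickup could then be automated rather than a manual download. A minimal sketch, assuming an S3-compatible bucket; the bucket name, key layout, and restore command are all placeholders, since none of this is defined yet.

```python
import boto3  # pip install boto3; assumes S3-compatible storage

# Placeholder bucket/key; the real storage location and credentials
# would be whatever projo defines for the daily snapshot.
s3 = boto3.client("s3")
s3.download_file(
    Bucket="projo-dumps-example",
    Key="daily/latest.sql",
    Filename="projo_dump.sql",
)
# The dump could then be restored into a local mirror for BI, e.g.:
#   psql -d projo_mirror -f projo_dump.sql
```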
Benedikt Voigt
marked this post as
planned
Christopher Yelegen
Hi Benedikt,
For starters, solution number 1 would be completely sufficient for us, as I don't want to write anything to the databases via an interface. The 24-hour delay is not that bad.
For me, the only question is: which views are exported, and in what format? It is not just about project data, but about ALL data (personnel, invoices, customers, ...).
Thanks in advance!
Benedikt Voigt
Christopher Yelegen: Hello Christopher, with solution number 1) we would provide you with the complete SQL database containing your data. Only the uploaded documents would not be included; all other raw data would be (personnel, time tracking, invoices, customers, ...).
We want to discuss this briefly at the UserGroup tomorrow morning (June 2nd, around 9:45).
Maybe you'll join in?
Meeting ID: 832 3356 7764
Code: projo