
OpenSource OpenAPI Boilerplates - NodeJS & Java

At Jiratech, we strive to empower innovation and build customer-driven, innovative products with an embedded immune system.

We also run a program, the Jiratech Foundation, through which we give back to the community and engage in social projects.


In that spirit, these are the boilerplates that we created and use regularly:


1) Java Spring Boilerplate: a dockerized, API-first, three-layer architecture with PostgreSQL and MinIO connectors.



2) NodeJS Boilerplate: an ExpressJS-based, TypeScript-adapted, three-layer architecture with a PostgreSQL connector.



These boilerplates have a unique characteristic: full integration with OpenAPI v3. We also wrote a tutorial on integrating them with ReactJS.


These boilerplates use a single, descriptive JSON description of the APIs and DTOs (interfaces, in TypeScript) that maps and enforces requests and responses, and also generates the APIs (Java) / handles the route components (NodeJS).
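As a sketch of this mapping (the `User` type and its fields are hypothetical, not taken from our actual schema), a DTO component in the OpenAPI JSON would be rendered on the NodeJS side as a TypeScript interface that enforces request/response shapes at compile time:

```typescript
// Hypothetical DTO as it would be generated from an OpenAPI `User` schema.
// Field names are illustrative; the real schema lives in the boilerplate repo.
interface IUser {
  id: number;
  email: string;
  displayName?: string; // optional fields map to non-required schema properties
}

// A route handler can now only accept and return objects satisfying the DTO:
function getUserResponse(user: IUser): IUser {
  return user;
}

const sample: IUser = { id: 1, email: "ada@example.com" };
console.log(getUserResponse(sample).email);
```

Because the interface is generated from the same JSON that generates the backend API layer, a schema change surfaces as a compile error on both sides instead of a runtime surprise.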

You can find this schema here:


I hope this helps you speed up reliable development as it did for us!

If you want to contribute or comment, we welcome any feedback.

Coming soon: ReactJS and React Native boilerplates with TypeScript and OpenAPI.


Integrate an OpenAPI specification into a ReactJS project with TypeScript and Axios, using openapi-generator

As a frontend engineer, I am always looking for ways to be more effective and efficient. My main goal is to be more productive when building the architecture of an application and to get rid of the small, time-consuming implementations. I have spent endless hours changing the API call layer, including the DTOs. Together with my team, we found a way to give frontend and backend engineers common ground, so they can communicate more easily and be more productive. In this article I want to help you work smarter:

- by generating functions for API calls and interfaces for the main objects in your application;

- by having more transparency whenever the OpenAPI specification of your project changes.



The OpenAPI Specification defines a programming-language-agnostic interface description for REST APIs. Read more in the OpenAPI Specification.



Given an OpenAPI specification (a JSON or YAML file), we would like to generate the following:

- interfaces for the DTOs;

- functions that can be used for calling APIs.

The generated code gives you all the information you need: not just which routes are exposed, but also which data types they use to communicate (request/response parameters).


How to accomplish these goals?

Before starting, you will need a JSON or YAML file. You can read more about its format at the following link: OpenAPI Specification Format.

This is a simple example of what such a JSON file looks like: Example OpenAPI Specification.
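For reference, a minimal OpenAPI v3 document has roughly the shape below (shown as a TypeScript object literal for convenience; the path, operation, and schema names are made up for illustration — see the official specification for the full set of fields):

```typescript
// Minimal sketch of an OpenAPI v3 document. All names are illustrative.
const openApiSpec = {
  openapi: "3.0.3",
  info: { title: "Example API", version: "1.0.0" },
  paths: {
    "/user/login": {
      post: {
        operationId: "loginUser", // becomes the generated function name
        requestBody: {
          content: {
            "application/json": {
              schema: { $ref: "#/components/schemas/LoginRequest" },
            },
          },
        },
        responses: { "200": { description: "Logged in" } },
      },
    },
  },
  components: {
    schemas: {
      // Each schema becomes a generated interface (DTO).
      LoginRequest: {
        type: "object",
        required: ["email", "password"],
        properties: {
          email: { type: "string" },
          password: { type: "string" },
        },
      },
    },
  },
};

console.log(Object.keys(openApiSpec.paths));
```

The `operationId` values drive the names of the generated API functions, and each entry under `components.schemas` becomes a generated interface.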


Using this example we must take the following steps:

1. Choose a code generation tool.

For frontend applications I use OpenAPI Generator.

Pros: it is straightforward, and you don't need further configuration in the application. You just need to modify the options of the command line used to generate the code (OpenAPI Generator typescript-axios).

Cons: you need to have Java installed on your machine or Docker image.

There are also other libraries that you can use:

- OpenAPI Client;

- sw2dts.

2. Define the command line that will be used to generate the code.

I use the following command:

openapi-generator generate -i <json file path> -g <client generator> -o <folder path where you want to generate the code>

Now you can add this command to the scripts section of your package.json.

I use typescript-axios as the client generator. I use Axios because it is a popular library for HTTP calls, and I use it in most of my React projects.

I prefer to add the option --model-name-prefix I, because it prefixes the names of the generated interfaces with 'I'.

Why? You may need to define classes that implement these interfaces in order to set default values for some fields. This naming convention will help you, especially when you import both the interface and the class into other files.
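For example (with a hypothetical `IUser` interface standing in for what the generator would emit), a hand-written class can implement the generated interface to supply those defaults:

```typescript
// Hypothetical generated interface (the `I` prefix comes from --model-name-prefix I).
interface IUser {
  id: number;
  email: string;
  isActive: boolean;
}

// Hand-written class implementing the generated interface, used to
// provide sensible defaults when creating new/empty entities in the UI.
class User implements IUser {
  id = 0;
  email = "";
  isActive = true; // a default value the interface alone cannot express

  constructor(init?: Partial<IUser>) {
    Object.assign(this, init);
  }
}

const newUser = new User({ email: "ada@example.com" });
console.log(newUser.isActive); // true
```

With the prefix convention, `import { IUser } from "./generated"` and `import { User } from "./models"` can coexist in the same file without renaming.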


How to use the generated code in your application?

One of the goals was to use the generated functions for making API calls. To achieve this, we need to import the class UserApi from the generated folder and instantiate it: const userApi = new UserApi(). The instance contains all the functions you need for making API calls to the server.

But now some questions arise:

- How do we write the config for the API calls (e.g. headers, timeout, baseUrl)?

- How do we add interceptors for the calls (request/response interceptors)?

Well, the answer is quite simple. You can create an Axios instance: const axiosInstance: AxiosInstance = axios.create({…}). On this instance you can add interceptors and other configuration.
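As a self-contained sketch of a request interceptor (the header name and the token source are assumptions for illustration), the handler itself is just a function from config to config, which you would then register on the instance with `axiosInstance.interceptors.request.use(...)`:

```typescript
// Minimal stand-in for axios' request config type, so this sketch runs
// standalone; in a real project import AxiosRequestConfig from axios.
interface RequestConfig {
  headers: Record<string, string>;
  timeout?: number;
}

// Request interceptor: attaches an auth header to every outgoing call.
// Where the token comes from (localStorage, a store, ...) is up to you.
function authInterceptor(config: RequestConfig, token: string | null): RequestConfig {
  if (token) {
    config.headers["Authorization"] = `Bearer ${token}`;
  }
  return config;
}

// With a real axios instance this would be registered as:
// axiosInstance.interceptors.request.use((cfg) => authInterceptor(cfg, getToken()));
const cfg = authInterceptor({ headers: {} }, "abc123");
console.log(cfg.headers["Authorization"]); // Bearer abc123
```

Response interceptors work the same way via `axiosInstance.interceptors.response.use(...)`, e.g. for global error handling or token refresh.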

After you have configured your axiosInstance, you can pass it as a parameter when you instantiate the UserApi:

const userApi = new UserApi(null, BASE_URL, axiosInstance);

Also, if an API call needs a different configuration than the one you set up on the axiosInstance, you can override it using the Configuration class imported from the autogenerated OpenAPI folder:

const userApi = new UserApi(new Configuration({ baseOptions: {…} }), BASE_URL, axiosInstance);


Use the OpenAPI-generated code in your sagas

Because context in JavaScript is dynamic, when you try to use a userApi method directly for making an API call, you will receive the following error: 'this is null'.

const response = yield call(userApi.loginUser, …args); // error

To handle this error you have to provide the context in some way. The call effect supports multiple ways of providing context. You can find them in the official docs (Redux Saga Call Context); here is the short version:

const response = yield call([userApi, userApi.loginUser], …args);
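To see why the context matters, here is a dependency-free sketch of the mechanism. The `fakeCall` helper below is our own stand-in, mimicking only how redux-saga resolves the `[context, fn]` tuple form of `call`; it is not redux-saga code, and `UserApi` here is a hypothetical simplification of the generated class:

```typescript
// Tiny stand-in for redux-saga's `call` effect resolution, purely to
// illustrate the context problem. Not redux-saga code.
type ContextTuple = [unknown, (...args: any[]) => any];

function fakeCall(fnOrTuple: ((...args: any[]) => any) | ContextTuple, ...args: any[]) {
  if (Array.isArray(fnOrTuple)) {
    const [context, fn] = fnOrTuple;
    return fn.apply(context, args); // `this` is preserved
  }
  // Detached class methods lose `this` here (undefined in strict mode).
  return fnOrTuple(...args);
}

class UserApi {
  private base = "https://api.example.com"; // hypothetical base URL
  loginUser(name: string): string {
    return `${this.base}/login/${name}`; // reads `this`, so context is required
  }
}

const userApi = new UserApi();
// fakeCall(userApi.loginUser, "ada") would misbehave: `this` is no longer userApi.
console.log(fakeCall([userApi, userApi.loginUser], "ada"));
```

The same reasoning applies to the real `yield call([userApi, userApi.loginUser], ...)`: the tuple tells the effect which object to bind as `this` before invoking the method.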


You also need to be aware of the following:

- Never modify the generated code!

- In order to keep the application up to date, you will need to regenerate the OpenAPI code on every build.

- The generated code should be listed in the .gitignore file.

- The OpenAPI specification may be used by different types of applications (mobile, desktop, web, server, etc.), so be careful when you modify it, because you can impact several platforms.

- As a best practice, you can keep the .json or .yaml file in a separate git repository. This way, every developer can be aware of changes.


If you are looking for an OpenAPI specification to start your own computer vision project, here are the standards that we use for building our own projects:





How to start-up your tech

When I started as a software developer in a multinational company, my main driver was always improving my skills as a coder, so I was always wary of the long-lasting meetings, the pain of setting up the environment, and all the time wasted on running and fixing tests with almost no return on investment. During each of those meetings, a thought ran through my mind: "Why do we keep losing time, when we could be implementing this thing?"


After I resigned, I started working on a startup with some friends, and the beauty of working there couldn't be compared to my last job. There were no rules, no meetings, and no restrictions on the resources we had. We didn't need to file a request to get access to a repository, wait days to have a program installed, or fix tests. We were just a team of four, sharing technical knowledge with one another and working day and night on our idea, using a wall full of sticky notes with features or bugs as a backlog and competing to see who would finish more of them by the end of the day. The only drawback was not having someone with more experience to guide us on the right path, so we started making mistakes, learning from them, and correcting them as we moved along.


As our codebase kept growing, so did our team, and since every member was working on their own little piece, we started losing the bigger picture. We couldn't know whether something done by X could impact something done by Y, and with burnout right behind us, we made obvious errors that someone could have foreseen if they had had the chance to review what was written.


We realized that we couldn't keep working like this for much longer, with our backlog beginning to fill up with regressions and technical debt. For the first time, I understood why we had all those meetings and restrictions at my old company. We needed a model like that to have stability. But we were still a startup; we couldn't afford to lose precious time in meetings or in reviewing what was done. We needed our own workflow: something that would not add too much overhead to our productivity, but enough to assure us that production errors would decrease and that we would be able to see an overall picture of how development was going.


So the search began for tools and services that could automate most of that overhead. Like many companies out there, our most obvious starting point was Atlassian, since we were already using Bitbucket Cloud as our versioning tool. But we didn't stop there. We added continuous integration and continuous delivery tools, a private Docker registry, error monitoring tools, an analytics engine to store, search, and view logs, automatic static analysis of code, and static and dynamic security testing services, all of them interconnected and all connected to the same user directory to ease user management, choosing an open-source solution whenever one that suited our needs existed. We added and configured them with the mindset that if something can be automated, then it should be automated.


Since our team works behind a VPN, we wanted to have all the tools needed for development in-house, so we installed them on-premise, on our local server. This is the point where things started to fall apart. In no more than two months, a blackout fried the server's motherboard. If it weren't for some backups that we had made manually, all the data would have been down the drain, from tasks and documentation to code. But we still lost all the hours of work spent installing and configuring those tools.

To keep this from happening again, the cleanest viable solution was Docker. Since Atlassian and some other third-party providers don't support Docker, we used images created by the community or created images to suit our needs. Everything worked, every volume was stored on our network-attached storage, and we could have replicated the setup on any virtual machine with a simple docker-compose command. But it wasn't scalable. And if something had happened to a container, by the time we figured out that it was down and what the problem was, it would already have been too late.


To mitigate this issue, the solution at hand was to move everything to Kubernetes. Combined with Rancher, all the tools are scalable, every piece of data is backed up every day, and we get notified about what is not working and why.


By the beginning of 2020, our attempt to bring some order and predictability to the development process had turned into a whole project, deployable and replicable with just one click, with self-scaling services that are constantly monitored for flaws and malfunctions. From versioning tools, task managers, error handlers and notifiers, code analysis tools, and log monitoring to CI/CD, all the services we use sit under one manager that works for us as our 24/7 DevOps, providing us with metrics, statistics, and forecasts about the infrastructure.


We believed that adding overhead to the development process would hurt performance, but in fact that overhead, and the time allocated to building this process, saved us months of refactoring and, most importantly, our relationships with clients: if one called us to report a problem, our developers were already preparing a fix, knowing which task and which commit introduced the issue, who added that commit, and why it wasn't caught by the tests.


This is the trade-off that everyone has to make at some point in order to bring order into chaos, and we found out the hard way that the overhead is barely perceptible if you use these tools right. We found a balance in our workflow, and we believe it isn't done yet. We are on the third iteration, and we still think it can be better: every day we improve our system, automating the automation, making the machines work for us and not the other way around, so that our lives, and the lives of the people to come, are easier.


