How often have you been annoyed by badly written documentation in Confluence? All those endless wiki pages nested inside each other until your monitor runs out of width. Do you remember what it’s like to find two pages on the same topic that contradict each other, or a wiki that matches only 40–60% of the actual state of your codebase? And it’s even “better” when the analysts have never heard of formatting! So, what is the reason for all this?
Why?
In my opinion, the main reason is a lack of discipline and control. Compared with software development, it’s hard to imagine analysts cross-reviewing each other’s work or keeping documentation under version control, and in some sense they don’t have any “types” that can validate their statements.
A subtle hint at statically typed languages.
Yes, they have some reviews and practices, but analysts usually get their documentation reviewed by business people, not by the tech folks who are going to build from that documentation. Imagine if they had some sort of CI/CD pipeline that checked their artifacts, applying linting and checkstyle. Life would be much better.
How to take control of documentation?
So, I want to share the experience of our team, which has successfully escaped this Confluence hell… The first step toward purification is moving all your documentation into a git repository as Markdown files. Once the documentation is in a structured format – Markdown – we can validate it. The second step toward God is adding a CI/CD pipeline with a linter and checkstyle to slap the clumsy hands that can’t format a piece of text properly. The third step, the gates into Heaven, is a code review process. Now we are close to heaven: developers control everything that enters the knowledge base. Looks cool, doesn’t it? Anything unclear gets resolved during the review process rather than at development time, thereby reducing the chance of a meeting. And once we arrive at totalitarianism in documentation – i.e. a quality gate is in place – any manipulation of the documentation is under our control, and you can reject any bullshit in the text.
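As an illustration of such a gate, here is a minimal sketch of how a Markdown linter could be wired into the build. It assumes Node.js (npx) is available on the CI agent and uses the public markdownlint-cli package; it is one possible wiring, not our exact pipeline:

```kotlin
// build.gradle.kts -- a sketch of a documentation quality gate.
// Assumptions: Node.js (npx) is on the build agent, and the public
// markdownlint-cli npm package is used as the linter.
plugins {
    base // provides the standard `check` lifecycle task
}

tasks.register<Exec>("lintDocs") {
    group = "verification"
    description = "Fails the build when a Markdown file violates the lint rules."
    commandLine("npx", "markdownlint-cli", "**/*.md")
}

// Wire the linter into `./gradlew check`, which is what CI runs,
// so a badly formatted page cannot reach the main branch.
tasks.named("check") {
    dependsOn("lintDocs")
}
```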
Why does control help us to dodge meetings?
The most interesting part of the blog post. To avoid meetings between programmers and analysts, we need to get analysts to write documentation that looks like code. It will feel native to a programmer and will reduce the number of misunderstandings between developer and analyst. Database descriptions live in SQL scripts, logic in Markdown, API specifications in OpenAPI or GraphQL schemas. Developers just look into the repository and see familiar things, instead of struggling with wiki pages, tables, etc. In the next sections we will see how this approach also helps us avoid writing boilerplate code…
Boilerplate dodging
Unfortunately we use Spring, so the tech part is only relevant for JVM developers, but you can still read it, pick up the ideas, and implement them with any tech stack. All the tools we use here are quite popular. Imagine your analysts writing API specifications that magically transform into code you can use. You may have heard about the API-first approach – we are going to abuse it to the maximum. First of all, you need to organize your repository like this:
contracts-repo/
|
|-- common/
|-- kafka/
|-- microservices/
|-- templates/
|   |
|   |-- schema/
|   |   |
|   |   |-- schema-name.gql
|   |
|   |-- db/
|   |   |
|   |   |-- table-name.sql
|   |
|   |-- openapi.yaml
|   |-- readme.md
|-- build.gradle.kts
|-- settings.gradle.kts
- `common` – directory for common components that can be referenced from any directory of contracts-repo
- `microservices` – one directory per microservice; contains folders named like the services in your application
- `templates` – template for a microservice
- `schema` – directory for `.graphql` schemas
- `db` – directory for schema descriptions in `.sql` scripts
- `openapi.yaml` – OpenAPI description of your API
- `readme.md` – file for describing anything that happens inside your microservice; has to be named the same as the microservice
- `build.gradle.kts`, `settings.gradle.kts` – default files for the Gradle project
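To make this layout buildable, each of those directories can be registered as a Gradle subproject. A minimal sketch of `settings.gradle.kts`, with hypothetical service names:

```kotlin
// settings.gradle.kts -- a sketch; the service names are illustrative.
rootProject.name = "contracts-repo"

include(":common")
include(":kafka")

// Every folder under microservices/ becomes its own subproject,
// so each one can be built and published as a separate artifact.
include(":microservices:order-service")
include(":microservices:payment-service")
```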
How does it work?
- Boilerplate code will be generated from the OpenAPI and GraphQL schemas.
- All directories inside `microservices` and `kafka` will be published as separate JARs.
- The `common` directory will also be published as a separate JAR.
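Our gradle-contracts-generator plugin is not published yet (see below), so as a stand-in, here is how the same spec-to-code step looks with the public OpenAPI Generator Gradle plugin; the package names, paths, and plugin version are illustrative:

```kotlin
// build.gradle.kts -- spec-to-code generation sketch using the public
// org.openapi.generator plugin as a stand-in for gradle-contracts-generator.
plugins {
    id("org.openapi.generator") version "7.4.0" // version is illustrative
}

openApiGenerate {
    generatorName.set("kotlin-spring")        // generate Spring-annotated Kotlin
    inputSpec.set("$projectDir/openapi.yaml") // the analysts' contract
    outputDir.set(layout.buildDirectory.dir("generated").get().asFile.path)
    apiPackage.set("com.example.contracts.api")
    modelPackage.set("com.example.contracts.model")
    configOptions.set(mapOf("interfaceOnly" to "true")) // interfaces + DTOs only
}
```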
What exactly will be produced?
- DTOs according to the types that were declared
- API interfaces, marked with Swagger and Spring annotations like `@Controller`, `@PostMapping`, `@RequestBody`, etc.
- Feign clients for external consumers that will communicate with our service
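To give a feel for the output, here is roughly what the generated contract code could look like. Every name below is hypothetical, and the exact shape depends on the generator configuration:

```kotlin
// A sketch of generated contract code; all names are illustrative.
import org.springframework.cloud.openfeign.FeignClient
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody

// DTOs generated from the declared schema types.
data class CreateOrderRequest(val customerId: String, val amount: Long)
data class OrderDto(val id: String, val status: String)

// API interface carrying the Spring (and, in real output, Swagger) annotations;
// the service implements it, so the endpoint cannot drift from the contract.
interface OrderApi {
    @PostMapping("/orders")
    fun createOrder(@RequestBody request: CreateOrderRequest): ResponseEntity<OrderDto>
}

// Feign client generated for consumers calling the service.
@FeignClient(name = "order-service")
interface OrderApiClient : OrderApi
```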
After generation and publishing, you can add these contracts as dependencies in your microservices.
Everything you need to do is set up a publication inside `build.gradle.kts` and add the gradle-contracts-generator plugin, as sketched below.
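Here is a minimal sketch of that publication setup with the standard maven-publish plugin; the coordinates and repository URL are placeholders, and the contracts plugin is commented out because it is not published yet:

```kotlin
// build.gradle.kts -- publication sketch; coordinates and URLs are placeholders.
plugins {
    `java-library`
    `maven-publish`
    // id("gradle-contracts-generator") // placeholder: plugin not published yet
}

publishing {
    publications {
        create<MavenPublication>("contracts") {
            groupId = "com.example.contracts"
            artifactId = project.name // e.g. the microservice folder name
            version = "1.0.0"
            from(components["java"]) // packs the generated classes as a JAR
        }
    }
    repositories {
        maven {
            name = "internal"
            url = uri("https://nexus.example.com/repository/maven-releases/") // your company repo
        }
    }
}
```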
The plugin will be published soon, so stay tuned! Please give it a star if you are waiting for this plugin – that would be a signal that someone needs it and that it should be published faster!
Pros and Cons
Let’s start with the cons:
- The main “disadvantage” is the analysts on your team: if they have skill issues, it will be challenging to get them to work inside such a process
- You may forget to update the contracts dependency version periodically
- Analysts have to stay 1–2 sprints ahead of the developers
What about the pros?
- First of all, there is a single documentation format, controlled by CI/CD
- The ability to review any change in documentation – all changes come through pull requests. In general, we get all the benefits of git here.
- Opportunity to write documentation in modern IDEs or code editors, like Writerside, IntelliJ IDEA, VS Code, etc.
- All migrations will be defined in one place
- No extra work – once a contract is described, published, and used, developers don’t need to write any boilerplate code such as DTOs
- Versioning – you can switch versions of your API back and forth
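To illustrate that last point: consuming a contract is a one-line dependency (coordinates are hypothetical) whose version you can pin, bump, or roll back:

```kotlin
// build.gradle.kts of a consuming microservice; coordinates are illustrative.
dependencies {
    // Bump to take the new API, or roll back to 1.1.0 to target the previous one.
    implementation("com.example.contracts:order-service:1.2.0")
}
```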