Every day I see something different and new, and it has reached a point of no return. Each person does it however they want, however they know, or however their own understanding has led them.
I see that people sometimes go crazy over how to make microservices communicate, and I don't quite understand why.
There are already plenty of ways to communicate between services, and surely in the blink of an eye there will be many more:
1 - REST APIs (I recommend this)
- Advantages: simplicity, wide adoption, easy to understand and use.
- Disadvantages: can be less efficient for high-frequency or low-latency communication.

2 - GraphQL
- Advantages: flexibility in queries; clients request exactly the data they need, reducing the number of API calls.
- Disadvantages: can be more complex to implement and learn; higher overhead on the server.

3 - Asynchronous messaging (RabbitMQ, Kafka)
- Used for asynchronous communication, where microservices exchange messages through a broker.
- Advantages: decoupling, scalability, resilience.
- Disadvantages: greater complexity in implementation and management.

4 - gRPC
- Advantages: high efficiency, support for multiple languages, ideal for low-latency communication.
- Disadvantages: greater initial complexity; less intuitive than REST.

5 - Service discovery with Eureka (part of Spring Cloud Netflix)
- Used for service discovery in microservices architectures.
- Advantages: facilitates scalability and resilience; lets services find each other dynamically.
- Disadvantages: adds complexity and requires additional configuration.
Why all the fuss when the simplest option is a REST API with a simple call? Why complicate it by adding another technology that will probably end up deprecated? To add another layer of security?...
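To make "a simple call" concrete, here is a minimal self-contained sketch using only the JDK (no Spring, so it can run standalone). The `/api/users/42` endpoint and the JSON payload are made up for illustration; in a real Spring Boot service you would use something like `RestTemplate` or `RestClient` against another service's actual URL:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCallDemo {

    // Spins up a stand-in "users" microservice on an ephemeral port, then
    // performs the "simple call" (one HTTP GET) against it from the caller's
    // side, returning "<status> <body>".
    static String fetchUser() throws IOException, InterruptedException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/users/42", exchange -> {
            byte[] body = "{\"id\":42,\"name\":\"Ada\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (var out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        try {
            // This is the entire client side: build a request, send it, read it.
            int port = server.getAddress().getPort();
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("http://localhost:" + port + "/api/users/42"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.statusCode() + " " + response.body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchUser());
    }
}
```

That is the whole protocol: no broker, no schema registry, no service registry. The trade-off, as listed above, is efficiency under high-frequency or low-latency demands.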
Another question.
Microservices can be separated in two ways:
1 - A parent project (Spring Boot with Maven / Gradle / Gradle Kotlin DSL) with modules, where each module is an independent Spring Boot project
2 - Independent projects
Let's go by points.
Why do I sometimes see it done in the 2nd way? For more control?
If you do it the second way and a large project has 50 microservices, then updating dependencies (and on top of that adding three new ones) means calling each programmer to do it, compiling each project one by one, and rebuilding each Docker image one by one...
If you do it the first way, dependencies are unified: everyone has the same versions and you update them in one place. And if one module needs an older version, it can still use it:
For example (in my case I don't use Maven, I use the Gradle Kotlin DSL), all projects would be updated from a single root build file:
```kotlin
plugins {
    java
    id("org.springframework.boot") version "3.3.2"
    id("io.spring.dependency-management") version "1.1.6"
}

group = "com.example"
version = "0.0.1-SNAPSHOT"

repositories {
    mavenCentral()
}

dependencies {}

// tasks.withType<Test> {
//     useJUnitPlatform()
// }

subprojects {
    apply(plugin = "java")
    apply(plugin = "org.springframework.boot")
    apply(plugin = "io.spring.dependency-management")

    group = "com.wammo"
    version = "0.0.1-SNAPSHOT"

    java {
        toolchain {
            languageVersion = JavaLanguageVersion.of(21)
        }
    }

    configurations {
        compileOnly {
            extendsFrom(configurations.annotationProcessor.get())
        }
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        implementation("org.springframework.boot:spring-boot-starter-web")
        testImplementation("org.springframework.boot:spring-boot-starter-test")
        compileOnly("org.projectlombok:lombok")
        annotationProcessor("org.projectlombok:lombok")
    }
}
```
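For completeness, a root build like the one above is paired with a settings file that lists the modules. This is a sketch; the module names here are made up, so substitute your actual service names:

```kotlin
// settings.gradle.kts at the repository root.
// Each include() below is one microservice module (hypothetical names).
rootProject.name = "parent-project"

include("users-service")
include("orders-service")
include("billing-service")
```

With this layout, a single `./gradlew build` compiles every module in one pass, and the Spring Boot Gradle plugin's `bootBuildImage` task can then produce an OCI image per module, which is the "all the images in one go" part.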
In addition, since everything is modular, each module remains independent and can be deployed independently; and since everything is unified, the dev simply builds and uploads all the images in one go.
If "total modularity" is what we're after, we can code in Notepad, with totally decoupled dependencies hosted over a VPN with SSL on a server in Russia, fetched with a key hash, deployed in Japan from a terminal, and pushed to production from that same deploy. Then we would be talking about total modularity.
I am looking for an answer to these two questions:
1 - Why all the fuss about microservice communication when the simplest option is a REST API with a simple call? Why complicate it by adding another technology that will probably end up obsolete? To add another layer of security?...
2 - Why do I sometimes see microservices built as separate projects (the second way)? Is it to have more control? A simple update will cost a lot of time and money that way, plus rebuilding the Docker images one by one... too much work.
Note: remember that modules are independent projects once compiled. I mention this because the typical person will come and tell me "it's to have independent projects".