Source: Cloud-native architecture with serverless microservices — the Smart Parking story from Google Cloud Platform
By Brian Granatir, SmartCloud Engineering Team Lead, Smart Parking
Editor’s note: When it comes to microservices, a lot of developers ask why they would want to manage many services rather than a single, big, monolithic application. Serverless frameworks make doing microservices much easier because they remove a lot of the service management overhead around scaling, updating and reliability. In this first installment of a three-part series, Google Cloud Platform customer Smart Parking gives us their take on event-driven architecture using serverless microservices on GCP. Then read on for parts two and three, where they walk through how they built a high-volume, real-world smart city platform on GCP—with code samples!
When “the cloud” first appeared, it was met with skepticism and doubt. “Why would anyone pay for virtual servers?” developers asked. “How do you control your environment?” You can’t blame us; we’re engineers. We resist change (I still use vim), and believe that proof is always better than a promise. But eventually, we found out that this “cloud thing” made our lives easier. Resistance was futile.
The same resistance to change happened with git (“svn isn’t broken”) and docker (“it’s just VMs”). Not surprising — for every success story, for every promise of a simpler developer life, there are a hundred failures (Ruby on Rails: shots fired). You can’t blame any developer for being skeptical when some random “bloke with a blog” says they found the next great thing.
But here I am, telling you that serverless is the next great thing. Am I just a bloke? Is this a blog? HECK YES! So why should you read on (other than for the jokes, obviously)? Because you might learn a thing or two about serverless computing and how it can be used to solve non-trivial problems.
We developed this enthusiasm for serverless computing building a smart city platform. What is a smart city platform, you ask? Imagine you connect all the devices and events that occur in a city to improve resource efficiency and quality of citizen life. The platform detects a surge in parking events and changes traffic lights to help the flow of cars leaving downtown. It identifies a severe rainstorm and turns on street lights in the middle of the day. Public trash cans alert sanitation when they are full. Nathan Fillion is spotted on 12th street and it swarm-texts local citizens. A smart city is a vast network of distributed devices (IoT City 2000!) streaming data, together with the means to correlate those events and react to them. In other words, it’s a hard problem with a massive scale—perfect for serverless computing!
[Image: In-ground vehicle detection sensor]
But before we go into a lot more depth about the platform, let’s define our terms. In this first article, we give a brief overview of the main concepts used in our smart city platform and how they match up with GCP services. Then, in the second article, we’ll dive deeper into the architecture and how each specific challenge was met using various serverless solutions. Finally, we’ll get extra technical and look at some code snippets and how you can maximize functionality and efficiency. In the meantime, if you have any questions or suggestions, please don’t hesitate to leave a comment or email me directly (email@example.com).
First up, domain-driven design (DDD). What is domain-driven design? It’s a methodology for designing software with an emphasis on expertise and language. In other words, we recognize that engineering, of any kind, is a human endeavour whose success relies largely on proper communication. A tiny miscommunication [wait, we’re using inches?] can lead to massive delays or customer dissatisfaction. Developing a domain helps ensure that everyone (not just the development team) is using the same terminology.
A quick example: imagine you’re working on a job board. A client calls customer support because a job they just posted never appeared online. The support representative contacts the development team to investigate. Unfortunately, they reach your manager, who promptly tells the team, “Hey! There’s an issue with a job in our system.” But the code base refers to job listings as “postings” and the daily database tasks as “jobs.” So naturally, you look at the database “jobs” and discover that last night’s materialization failed. You restart the task and let support know that the issue should be resolved soon. Sadly, the customer’s problem was never fixed, because you never touched the “postings” error.
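One way to make the ubiquitous language stick is to bake it into the code itself, so the ambiguous word never appears. Here’s a minimal sketch of that idea; the class names are illustrative, not from any real job-board codebase:

```python
from dataclasses import dataclass

# Ubiquitous language in action: the word "job" is banned from the
# codebase because it means two different things to two different teams.

@dataclass
class JobPosting:
    """A customer-facing listing — what the client calls a 'job'."""
    title: str
    published: bool = False

@dataclass
class ScheduledJob:
    """An internal maintenance task — what the database team calls a 'job'."""
    name: str
    last_run_succeeded: bool = True
```

With these names, the manager’s report “there’s an issue with a job” forces the question “a posting or a scheduled task?” before anyone starts debugging the wrong thing.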
Of course, there are more potent examples of when language differences between various aspects of the business can lead to problems. Consider the words “output,” “yield,” and “spike” for software monitoring a nuclear reactor. Or, consider “sympathy” and “miss” for systems used by Klingons [hint: they don’t have words for both]. Is it too extreme to say domain-driven design could save your life? Ask a Klingon if he’ll miss you!
In some ways, domain-driven design is what this article is doing right now! We’re establishing a strong, ubiquitous vocabulary for this series so everyone is on the same page. In part two, we’ll apply DDD to our example smart city service.
Next, let’s discuss event-driven architecture. Event-driven architecture (EDA) means constructing your system as a series of commands and/or events. A user submits an online form to make a purchase: that’s a command. The items in stock are reserved: that’s an event. A confirmation is sent to the user: that’s an event. The concept is very simple. Everything in our system is either a command or an event. Commands lead to events and events may lead to new commands and so on.
Of course, defining events at the start of a project requires a good understanding of the domain. This is why it’s common to see DDD and EDA together. That said, the elegance of a true event-driven architecture can be difficult to implement. If everything is a command or an event, where are the objects? I got that customer order, but where do I store the “order” and how do I access it? We’ll investigate this in much more detail in part two of this series. For now, all you need to understand is that our example smart city project will be defining everything as commands and events!
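The command-leads-to-events cycle from the purchase example can be sketched in a few lines. This is a toy illustration of the pattern, not code from our platform; the type and field names are made up:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PlaceOrder:          # a command: a request to change the system
    order_id: str
    items: List[str]

@dataclass
class ItemsReserved:       # an event: a fact that has already happened
    order_id: str
    items: List[str]

@dataclass
class ConfirmationSent:    # a follow-on event
    order_id: str

def handle(command: PlaceOrder) -> List[object]:
    """Commands produce events; events may in turn trigger new commands."""
    return [
        ItemsReserved(command.order_id, command.items),
        ConfirmationSent(command.order_id),
    ]
```

Note the grammar: commands are imperative (“place order”), events are past tense (“items reserved”). That naming convention alone keeps the two halves of the cycle from blurring together.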
Now, onto serverless. Serverless computing simply means using existing, auto-scaling cloud services to achieve system behaviours. In other words, I don’t manage any servers or docker containers. I don’t set up networks or manage operations (ops). I merely provide the serverless solution my recipe and it handles creation of any needed assets and performs the required computational process. A perfect example is Google BigQuery. If you haven’t tried it out, please go do that. It’s beyond cool (some kids may even say it’s “dank”: whatever that means). For many of us, it’s our first chance to interact with a nearly-infinite global compute service. We’re talking about running SQL queries against terabytes of data in seconds! Seriously, if you can’t appreciate what BigQuery does, then you better turn in your nerd card right now (mine says “I code in Jawa”).
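To get a feel for it, here’s the general shape of running a BigQuery query from Python against one of Google’s public sample datasets. The query itself is plain standard SQL; the client call is a sketch that assumes the `google-cloud-bigquery` package and default application credentials are set up:

```python
# The most-revised Wikipedia articles in the public sample dataset.
# Terabyte-scale scans like this come back in seconds, with zero
# servers provisioned on our side.
QUERY = """
SELECT title, COUNT(*) AS revisions
FROM `bigquery-public-data.samples.wikipedia`
GROUP BY title
ORDER BY revisions DESC
LIMIT 10
"""

def run_query(query: str):
    """Submit the query to BigQuery and return the result rows.

    Assumes the google-cloud-bigquery package is installed and
    application-default credentials are configured.
    """
    from google.cloud import bigquery
    client = bigquery.Client()
    return list(client.query(query).result())
```

That’s the whole program. No cluster sizing, no capacity planning: you pay for the bytes scanned and nothing else.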
Why does serverless computing matter? It matters because I hate being woken up at night because something broke on production! Because it lets us auto-scale properly (instead of the cheating we all did to save money *cough* docker *cough*). Because it works wonderfully with event-driven architectures and microservices, as we’ll see throughout parts 2 & 3 of this series.
Finally, what are microservices? Microservices is a philosophy, a methodology, and a swear word. Basically, it means building our system in the same way we try to write code, where each component does one thing and one thing only. No side effects. Easy to scale. Easy to test. Easier said than done. Where a traditional service may be one database with separate read/write modules, an equivalent microservices architecture may consist of sixteen databases each with individual access management.
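“Each component does one thing and one thing only” is easiest to see in code. Here’s a toy sketch of two parking-flavoured handlers, each a self-contained microservice that reacts to one kind of event and emits another; the function names and message shapes are hypothetical:

```python
# Each handler is one microservice: one input event type, one output,
# no shared state, no side effects beyond the returned message.

def update_occupancy(event: dict) -> dict:
    """Reacts only to 'car-detected' events; knows nothing about billing."""
    return {"type": "bay-occupied", "bay_id": event["bay_id"]}

def start_billing(event: dict) -> dict:
    """Reacts only to 'bay-occupied' events; knows nothing about sensors."""
    return {"type": "session-started", "bay_id": event["bay_id"]}
```

Because neither function knows the other exists, each can be tested, deployed, and scaled on its own, which is exactly the payoff microservices promise.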
Microservices are a lot like eating your vegetables. We all know it sounds right, but doing it consistently is a challenge. In fact, before serverless computing and the miracles of Google’s cloud queuing and database services, trying to get microservices 100% right was nearly impossible (especially for a small team on a budget). However, as we’ll see throughout this series, serverless computing has made microservices an easy (and affordable) reality. Potatoes are now vegetables!
With these four concepts, we’ve built a serverless sandwich, where:
- domain-driven design is the recipe, making sure everyone means the same thing when they say “sandwich”;
- event-driven architecture is the assembly, with every step a command or an event;
- microservices are the ingredients, each doing one thing and one thing only.
And finally, serverless is having someone else make the sandwich for you (and cut off the crusts), running components on auto-scaling, auto-maintained compute services.
As you may have guessed, we’re going to have a microservice that reacts to every command and event in our architecture. Sounds crazy, but as you’ll see, it’s super simple, incredibly easy to maintain, and cheap. In other words, it’s fun. Honestly, remember when coding was fun? Time to recapture that magic!
To repeat, serverless computing is the next big thing! It’s the peanut butter and jelly sandwich of software development. It’s an uninterrupted night’s sleep. It’s the reason I fell back in love with web services. We hope you’ll come back for part two where we take all these ideas and outline an actual architecture.