In Search of Simplicity
Improving Wunderlist’s sync
For such a young company, only three years old, 6Wunderkinder has a long and somewhat dramatic history. We catapulted into the spotlight with the original release of Wunderlist, garnering positive attention from users and the tech community. Then, according to plan, we refocused from Wunderlist to Wunderkit in an attempt to reinvent project management software.
We (almost) bet the company on Wunderkit, and as many of you know, it didn't turn out too well. We halted development on Wunderkit last year and took from it a whole lot of technical, design, and business lessons.
So when the time came to go back to our roots with Wunderlist 2, there was no room for error. We had to prove to the world, especially our customers, our investors, and ourselves, that we were still the 6Wunderkinder that inspired such passion among the users of Wunderlist. We focused everything we had on Wunderlist 2, and prepared for a massive launch at the end of 2012.
At the same time, we moved from using a cross-platform user interface framework to creating native clients for Mac OS, iOS, Android, Windows, and the Web. We had to rethink how our synchronization worked and had to do it in such a way that it would serve each of those native clients well.
A lot was changing, and we were under great pressure to make it perfect. This pressure led to fear. And we know what fear leads to:
"What if it doesn't scale?"
"What if we need to change it some day?"
That's right. We made the synchronization and back-end of Wunderlist 2 too complex. This complexity led to performance and scalability problems, and to issues with list data not synchronizing properly. It also led to downtime, during which clients could not synchronize at all.
When I arrived in Berlin in February of this year, there was a lot of work to do. Like any hubris-filled developer, one of my first reactions was to ask myself, "should we rewrite this?" I've experienced Big Rewrites and written a lot about them in the past. They rarely go the way you want them to, and they almost always take longer and cost more than you expect. We chose instead to divide the problem into a series of incremental (though sometimes big) steps. We outlined some of the ramifications of these changes in this post.
So, with fear in our hearts, we set off on a mission to make Wunderlist more scalable, faster, bulletproof, and more efficient. Above all, we set out to make Wunderlist simple.
I have made this longer than usual because I have not had time to make it shorter.
- Blaise Pascal
We've been working really hard over the past few months and have made a ton of improvements. I'm proud of what we have accomplished so far. Not only have we made Wunderlist faster and more scalable, but we've created some really nice technology in the process, especially around how we do low-risk zero-downtime deployments of the API.
This is the first in a series of mostly technical articles in which I'll talk about mistakes we have made, how we've chosen to fix them, how to evolve from a monolithic application architecture to a more flexible one, and how we've handled the everyday challenges of serving millions of users. We want to show our users very transparently that while we have a lot of work to do, we are confidently and methodically improving our systems and infrastructure and have been doing so for months. We also hope that other developers and product teams can learn from our experiences.
You can use this page as a placeholder for the full table of contents as we go. Here are a few topics which may become links in the near future. Expect the list to grow and change as we progress, but this will give you an idea of some of the topics we'll explore.
- Measuring everything
- Deconstructing monolithic code
- Going with what you know - when (and when not) to introduce new technologies
- Our approach to Immutable Infrastructure (à la this article on my personal site)
- Monolith Antipatterns: Abstractions stacked upon abstractions
- Problems and solutions in implementing tiny, heterogeneous services
- Deconstructing monolithic data
- Premature optimization - micro- and macro-optimizations
- The risk of idealistic technology choices and how doing the "wrong" thing is often the right choice