Last week we attended QCon London 2016 (March 7-9), run by the InfoQ folks and held at the Queen Elizabeth II Conference Centre in Westminster. The conference ran for three days (followed by two days of workshops) and was targeted at software development professionals, particularly software architects and CTOs.
Each day of the conference was organised into seven tracks (six plus one sponsored track). The tracks covered topics that the organisers believe are either in the innovators category, like Unikernels, or in the early adopters phase, like microservices and containers. Topics covered across the three days included: containers in production, microservices, DevOps and Continuous Integration/Continuous Delivery (CI/CD), stream processing, security, architecting for failure, and many more. Each track had five talks and one open space session: a facilitated discussion where participants set the agenda around the track's topic.
Overall, the quality of the talks we attended over the three days was very high: we'd rate around 85% of the talks green (meaning they exceeded expectations), a few yellow, and one red (not telling!). The conference really felt like it was organised for developers – in fact, the organisers use Sprints when planning events. Of course they do. The only issue, a bit of an annoyance, was that connectivity was very poor in the larger rooms when packed – imagine, we couldn't respond to Slack messages instantly.
In this blog, we'll highlight some of the best talks from day one, with follow-ups for days two and three. The tracks for day one were: 'Back to Java', 'Stream processing @ scale', 'DevOps & CI/CD', 'Head-to-tail functional languages', 'Architecting for Failure', and '21st Century Culture from Geek on the Ground'. How to choose between them? Given our work and interests at Sandtable, we gravitated towards the DevOps & CI/CD track.
The opening keynote was given by Adrian Colyer from Accel Partners. Adrian writes a daily blog called 'The Morning Paper', in which he comments on interesting and influential papers from computer science (350 and counting). It's a great resource. His talk was titled 'Unevenly distributed' and was about why we (as practitioners) should want to read more academic papers (I agree, by the way). The title is taken from William Gibson's quote: 'The future is already here – it's just not very evenly distributed.' Adrian went on to give five good reasons to read more. Firstly, ideas in papers provide good thinking tools, and are sometimes counterintuitive (or counter-trend) ones. Secondly, reading papers can help raise your expectations about what is possible – again, sometimes counterintuitively. He gave an interesting example from Microsoft, where it was discovered that for a particular system it was possible to do less testing (which comes with a cost) without sacrificing quality. Next, papers contain many useful lessons from the field, particularly for distributed systems, as large tech companies (working at huge scales) are publishing about and open-sourcing their technology. Google's papers about running machine learning systems at scale are good examples. Adrian then drew a comparison between papers and 'The Great Conversation', much like the Western canon of books: through exposure to a broader set of ideas (problems and solutions), it is possible to make potentially fruitful connections between areas. Sign me up. Finally, reading papers can give you insight into what technologies are to come, what is on the edge – the future – exciting technologies like: HTM; Persistent Memory NI; NVDIMMs; RDMA; NVMe; and more. Look 'em up. Message received, Adrian; we'll try to read more papers.
As we said, for day one the CI/CD track looked most interesting to us. The first talk of the track was given by Lianping Chen from Paddy Power, on lessons learnt from moving to CD. Lianping gave a great overview of the reasons for using CD and discussed some hard-earned lessons from migrating to it. Lessons included: make deployment boring (and easy); use lots of automation (automate the boring stuff!); align test and production environments (sounds good); and deploy in small batches to reduce deployment risk. Ultimately, employing CD is about reducing cycle time (from user stories to production), so that you can get feedback from users as quickly as possible. And this is key, of course. After all, he said, requirements are "not requirements, but hypotheses." Get feedback from users ASAP, then formulate new hypotheses. Rinse and repeat. Paddy Power reduced their cycle time from months to weeks or even days. You might be thinking: "Why are we not all using CD?" Good question. Lianping did note, however, that it took Paddy Power four years to migrate to CD. Ouch. It is an all-in approach; you must embrace it holistically. Go on. You know you wanna.
Lianping also made some interesting points about testing. Testing is not just about tests, but also about design: you must design for testability. Test-driven development (TDD) is key. Their testing approach – a combination of TDD, acceptance-test-driven development (ATDD), and behaviour-driven development (BDD) – was so successful that they no longer need a bug-tracking system. Some helpful rules for developers: fix broken tests now; aim for very high test coverage (like 95%!); eliminate flaky (non-deterministic) tests; and treat tests as first-class citizens – they get reviewed like any other code. Sounds good; let me just tell the team.
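To make the flaky-test rule concrete, here's a minimal Python sketch of one common fix: injecting a seeded random source instead of relying on global randomness. The `sample_discount` function and its names are invented for illustration, not anything Paddy Power described.

```python
# Taming a flaky test by injecting its source of randomness.
# `sample_discount` is a made-up stand-in for some domain logic under test.
import random

def sample_discount(user_ids, rng):
    """Pick a user to receive a promotional discount."""
    # Sort first: iteration order over a set is not stable across runs.
    return rng.choice(sorted(user_ids))

# In a test, pass a seeded RNG so the result is reproducible run-to-run:
winner = sample_discount({"alice", "bob", "carol"}, rng=random.Random(42))
rerun = sample_discount({"alice", "bob", "carol"}, rng=random.Random(42))
assert winner == rerun  # deterministic: same seed, same pick
```

The same injection trick applies to clocks, network calls, and anything else that makes a test's outcome depend on its environment.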
After lunch, Peter Thorngren from Volvo Trucks gave a great talk on their approach to the development of software running on Volvo trucks. Their trucks are running more and more software, and becoming smarter and smarter. Peter opened with this cute advertisement about four-year-old Sophie, a test driver of Volvo trucks. She's definitely enjoying her job.
Volvo have an interesting setup because the cost of testing software on trucks is prohibitively high, as you might expect. Because of this, a lot of testing is done using simulation, and more recently combining virtualisation with real trucks. That is, parts of the truck are virtualised for running tests but some signals come from real trucks, to create more realistic conditions. Applying CI/CD has brought down the cycle time for new software features from weeks to minutes.
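As a rough illustration of the idea – and to be clear, none of this is Volvo's actual code or architecture; every name here is invented – here's a Python sketch in which a virtualised component is exercised with a signal trace that could just as well be replayed from a real truck:

```python
# Hypothetical sketch: a virtualised controller tested against signals
# that could come from a simulator or be replayed from a real vehicle.

class SimulatedBrakeController:
    """Stand-in for a virtualised on-truck component under test."""
    def command(self, speed_kph):
        # Toy logic: brake above a speed threshold, otherwise coast.
        return "brake" if speed_kph > 80 else "coast"

def run_against_trace(signal_source, controller):
    # signal_source may be a synthetic generator, or a logged real drive.
    return [controller.command(speed) for speed in signal_source]

# Speeds replayed from (say) a recorded drive:
real_truck_trace = [60, 75, 90, 85, 70]
commands = run_against_trace(real_truck_trace, SimulatedBrakeController())
assert commands == ["coast", "coast", "brake", "brake", "coast"]
```

The point is the seam: because the controller only sees a signal source, the same test harness runs against pure simulation or against real-truck inputs.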
Peter also speculated that there could be fully autonomous trucks by 2030. It’s going to be fascinating to see how this progresses next to passenger cars, e.g. Google self-driving cars. Oh, and Peter said maybe, possibly, the truck software platform could be open-sourced — imagine hacking on a Volvo truck platform. Exciting!
In the afternoon, we attended an excellent talk by Sam Adams, 'CD at LMAX: Testing into production and back again.' LMAX write software for FX trading and have customers such as the London Foreign Exchange. They were a very early adopter of CD (the Continuous Delivery book only came out in 2010), and hence their approach to testing and deployment is very advanced. So it was exciting to hear about it – a masterclass in testing for CD, if you will. It is worth remembering, however, that LMAX's software deals with financial transactions, so you would expect a serious investment in testing and quality.
For a start, LMAX has around 2 million lines of code, which is not particularly high (the Linux kernel has around 12 million); however, the codebase is split roughly 50/50 between tests and functional code. Employing the approach they do for testing and deployment, they have a very low rate of production issues (probably a good thing for an FX platform) – according to Sam, an order of magnitude below the industry average. They run on bare metal for performance, so no virtualisation or Cloud for production services. They use CD for rapid and sustainable delivery, fast feedback, testing and automation, and to focus on quality. Their deployment pipeline reads straight out of the CD book: commit/AT/staging/UAT/prod. However, they run many of their tests outside the core pipeline to allow for faster feedback. This isn't really surprising when you look at the breadth of their testing: integration tests; performance tests; system reliability tests; static analysis; data migration tests; invariant tests; dependency checking; third-party integrations; even testing-in-live. It's a testing Xmas list. As for acceptance tests (ATs), they have over 12,000 that run across 64 servers and take 22 minutes to complete. ATs are written in a DSL (domain-specific language), which allows for maintainability and clarity, and are contributed to by developers, QA engineers, and business analysts. As for the tests themselves, Sam advised looking for natural test boundaries; for LMAX these are entities like users, accounts, instruments, and currencies. Also: tests should have no global state (yeah, defo!); abstract out time (it can be tricky to test asynchronous systems); and it's a good idea to handle intermittent (flaky) tests in the DSL.
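For a flavour of what a test DSL can look like, here's a toy Python sketch. LMAX's actual DSL is far richer (and not Python), and every name here – `TradingDsl`, `place_order`, and so on – is invented; the thing to notice is how the unique account suffix bakes test isolation into the DSL itself:

```python
# Toy acceptance-test DSL sketch. All names are hypothetical.
import uuid

class TradingDsl:
    def __init__(self):
        self._accounts = {}

    def create_account(self, alias):
        # A unique suffix isolates this test run from every other run
        # sharing the same environment.
        real_name = f"{alias}-{uuid.uuid4().hex[:8]}"
        self._accounts[alias] = {"name": real_name, "orders": []}
        return real_name

    def place_order(self, account, instrument, quantity, price):
        self._accounts[account]["orders"].append((instrument, quantity, price))

    def verify_order_count(self, account, expected):
        actual = len(self._accounts[account]["orders"])
        assert actual == expected, f"expected {expected} orders, got {actual}"

# A test then reads in the language of the domain, not the implementation:
dsl = TradingDsl()
dsl.create_account("trader1")
dsl.place_order("trader1", instrument="EUR/USD", quantity=1000, price=1.1)
dsl.verify_order_count("trader1", expected=1)
```

Because tests talk to the DSL rather than to any concrete API, the same test can be pointed at different interfaces, and flakiness can be handled (e.g. retries) in one place.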
An interesting testing approach Sam mentioned was 'testing-in-live'. He explained that, as a consequence of good test isolation, it's possible to run tests in live systems. The key here is to make sure you've got good isolation (yes!), can handle multi-tenancy, and do not pollute business data. Sounds doable, right?
Sam also discussed feature toggles, which are on our horizon. Feature toggles enable incremental delivery, helping push out small testable units. They also help to reduce deployment friction and avoid feature rushes. Sam discussed three approaches to implementing feature toggles: hard-coding them; using a configuration-based approach; and using permissions and preferences. The last approach is what LMAX use, and what we've started thinking about. The idea is to use fine-grained permissions to control who can use or view features. The advantages of this approach are that the feature toggles are built into the system; it enables testing legacy and new behaviour together; and, perhaps most importantly, it decouples 'deploy' from 'release'. Using feature toggles in this way makes it easier to roll out features incrementally to, say, groups of users.
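A minimal sketch of the permission-based flavour might look like the following Python (names and structure are ours, purely illustrative, not LMAX's implementation):

```python
# Sketch of permission-based feature toggles. All names are illustrative.

class FeatureToggles:
    def __init__(self, grants):
        # grants: feature name -> set of user ids permitted to see it
        self._grants = grants

    def is_enabled(self, feature, user):
        return user in self._grants.get(feature, set())

def render_dashboard(user, toggles):
    # 'Deploy' and 'release' decouple here: the new code path ships dark,
    # and granting the permission releases it per user or per group.
    if toggles.is_enabled("new-dashboard", user):
        return "new dashboard"
    return "legacy dashboard"

toggles = FeatureToggles({"new-dashboard": {"alice"}})
assert render_dashboard("alice", toggles) == "new dashboard"
assert render_dashboard("bob", toggles) == "legacy dashboard"
```

Note that both code paths live in production at once, which is exactly what makes it possible to test legacy and new behaviour side by side.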
Finally, it’s worth remembering that while LMAX have a huge and broad test suite, they employ CD for rapid and sustainable delivery. Ultimately they produce software with high confidence, of high quality, and in fast iterations. Not bad.
The last talk of the day was ‘Acceptance Testing for CD’ given by Dave Farley, co-author of the seminal book ‘Continuous Delivery’. Dave’s talk was packed with good advice, if ATs are your thing. He started with an overview of ATs and moved on to properties of good ATs. Much to learn.
ATs are, of course, for testing whether the code does what a user wants – they provide an automated 'definition of done', in Agile/Scrum parlance. ATs are often derived from the acceptance criteria included with user stories. The idea is that ATs provide timely feedback on stories, closing the feedback loop: we don't have to ask users; we can use tests as proxies to check whether the system does what they want. A good AT is an executable specification of the behaviour of the system. So far, so good.
As for who owns the ATs: anyone can write them, but it's developers who will break them, so it should be their responsibility to keep them working. Makes sense. An anti-pattern here is having separate development and QA teams. Developers own ATs.
"What properties make for good ATs?" I'm glad you asked. Dave says good ATs aim at the "what", not the "how" – focus on the desired behaviour, not on the implementation. Another property is isolation: isolated from the system under test (scoping); isolated from other test cases (you may be running many test cases, so avoiding dependencies between them is a must); and isolated from themselves over time (make sure to use UUIDs and to clean up). Tests should also be repeatable: fake out external systems and check state via back-channels. He also suggested using a DSL for writing tests, so that tests can be written in the language of the problem domain. A simple DSL solves many problems: readability; maintenance; separation of concerns; and the ability to abstract over different APIs (making it possible to run tests across multiple interfaces). ATs should test any change, and tests should be deterministic (who wants non-deterministic tests!? Yikes!). Dealing with time in tests can be tricky, so it's best to control it and treat it as an external dependency. Finally, of course, ATs should be efficient – especially if you have tens of thousands of tests! Lots of great properties.
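Treating time as an external dependency usually means injecting a 'clock' that tests can control. Here's a minimal Python sketch of the pattern (all names hypothetical, not from Dave's talk):

```python
# Sketch of abstracting time behind an injectable clock. Names are invented.
import time

class SystemClock:
    """Real clock, used in production."""
    def now(self):
        return time.time()

class FakeClock:
    """Controllable clock, used in tests: no sleeping, no flakiness."""
    def __init__(self, start=0.0):
        self._now = start
    def now(self):
        return self._now
    def advance(self, seconds):
        self._now += seconds

class SessionManager:
    """Toy component: expires sessions after a timeout."""
    TIMEOUT = 30 * 60  # seconds

    def __init__(self, clock):
        self._clock = clock
        self._started = {}

    def start(self, session_id):
        self._started[session_id] = self._clock.now()

    def is_expired(self, session_id):
        return self._clock.now() - self._started[session_id] >= self.TIMEOUT

# The test jumps time forward instantly instead of waiting 30 minutes:
clock = FakeClock()
sessions = SessionManager(clock)
sessions.start("s1")
clock.advance(31 * 60)
assert sessions.is_expired("s1")
```

Production wires in `SystemClock`; tests wire in `FakeClock`. The behaviour under test never changes, only its source of time does.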
Day one was very focussed on Continuous Delivery. For us, the case for CD has never been clearer. Who doesn't want rapid and sustainable delivery? However, it has to be backed by a strong test-centric culture. At Sandtable, we're aiming for CD but have some work to do (we're quite far from LMAX heights, certainly). We will of course continue to improve; at least now we know what to aim for.
Stay tuned for days two and three!