CraftConf 2014 - Day 1 Summarized

Posted by Tero Parviainen (@teropa)

The CraftConf conference has me back in Budapest for some industrial strength geekery, and what a conference it is! The speaker lineup is what originally attracted me here and it certainly hasn't disappointed. But everything else has just worked as well: The venue is fantastic, the Wifi uncharacteristically functional, the pacing of the schedule just right and the coffee in between sessions good. Kudos to the organizers!

Here's a hopefully semi-cohesive summary of some of the talks I attended today.

Bodil Stokke: Programming, Only Better

Bodil delivered the opening keynote with her by now characteristic style that features Dijkstra's disapproving glare and Pinkie Pie's party cannon in perfect balance.

The Agile manifesto spawned a whole lot of process, much of it now considered a silver bullet: iterative design, pair programming, test-driven development. There's a lot of enthusiasm for these practices, which is justified, because they work for a lot of people.

But the thing is, people are different, and practices that boost one person's productivity can be detrimental to another's: An extrovert (like Pinkie Pie) may love pair programming, but an introvert (like Fluttershy) can't focus with others around. For her, pairing is not only detrimental to code quality but also just exhausting.

As an industry, we have no idea what we're doing. This is not science. But this is our best effort. We try these things anyway, but we should at least be agile about it.

Agile sees programming as a matter of process, not tools, but the question of tools (like programming languages) may be even more important. The Out of the Tar Pit paper is all about this: making code more understandable, and easier to reason about, by reducing complexity.

State is one of the greatest causes of complexity in programs. There's a corollary in tech support: you tell people to turn the computer off and on again, because resetting the state often solves the problem. In Java, setters have a tendency to destroy your ability to reason about the code.
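To make the setter point concrete, here's a small JavaScript sketch of my own (not from the talk):

```javascript
// With a setter, the answer getTotal() gives depends on everything
// that has happened to the object before the call.
function Cart() {
    this.discount = 0;
}
Cart.prototype.setDiscount = function (discount) {
    this.discount = discount; // anyone holding a reference can change this
};
Cart.prototype.getTotal = function (price) {
    return price - this.discount; // depends on hidden, mutable state
};

var cart = new Cart();
// ...if any code anywhere may have called cart.setDiscount(...),
// you can't know what the next line returns just by reading it:
cart.getTotal(100);
```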

Object-oriented programming attempts to solve the issue by encapsulating the state. This helps, since you can enforce internal integrity constraints. But encapsulated state also encapsulates surprises. The state is still there.

Functional programming idealises referential transparency. There is no state, no side effects. Just inputs and outputs. When you look at it more closely though, the state is still there, it's just being passed around. This is better since at least you can see the state now. It is transparent.
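Continuing the sketch above, the referentially transparent version takes the state as an explicit input and returns a result that depends on nothing else (again my example):

```javascript
// No hidden state: the same inputs always produce the same output,
// so the call can be understood - and tested - in isolation.
function total(price, discount) {
    return price - discount;
}

total(100, 10); // always 90, no matter when or where it's called
```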

There's another kind of complexity though, that of control, that still exists in functional programming. What order do things happen in? What inputs can there be?

Declarative programming is one approach that gets even closer to an ideal language that eliminates both state and control. You just tell it what you want, and it figures out how to get there. Logic programming, exemplified by Clojure's core.logic, has some of these characteristics.

"While we may not have the perfect language, maybe we should prefer the one with fewer surprises."

Gojko Adzic: How I Learned To Stop Worrying And Love Flexible Scope

I've never seen Gojko talk before, though I'm familiar with his writing. Still haven't gotten around to Impact Mapping, though people keep recommending it. After this talk it has definitely moved up my reading list, as has his upcoming free book about improving user stories.

Flexibility is something people pay extra for in other industries. In plane tickets, for example, flexibility has a lot of value: you pay extra to be able to change things. Why, then, do people fight flexibility in software so much? People only become receptive to flexibility in crisis mode. Why is it not the normal mindset?

There's a real problem in agile in that it's not flexible from a business perspective. We've built flexibility into our process, but it's not flexible for business users.

User stories are part of the problem. As a backlog management tool, they're flawed. When "flexible" means just putting things in and out of the backlog, it is dangerous. People just invent stuff and it goes in ("user story stream of consciousness"). Nothing big ever gets done. Scrum says the stories must be small, must fit in a sprint. These kinds of stories don't have big business impact. They don't really communicate what the business needs.

Uncertainty cannot be avoided. Unpredictable changes will always happen. Business people understand this. The interesting question is: How much is the insurance policy against this worth?

In the book Adapt by Tim Harford there's a discussion of two kinds of plans: Linear and flexible. Linear plans never work. This is because of unexpected things that come from outside our control. They come in three dimensions:

  1. Local - what works in one place doesn't work in another.
  2. Time - things happen over time that you didn't plan for.
  3. Human - humans are unpredictable; they will tell you what they want but they actually want something different.

When these dimensions hit our business, we can't control it. Naïve agile doesn't deal with this. Agile has the tools but we need to look at user stories differently.

Principles

Today we do the formulaic "As a [role] I need [thing] in order to [activity]". People argue about the format, but this is disconnected from flexible planning.

What you need is a roadmap. A roadmap is not a backlog in JIRA. Roadmaps should have a concept of variation, of selection, and of survivability. The hint is in the word: A roadmap is a map of roads. There are options on what roads to take. You can get lost, but then you replan. Or, even better, your GPS does. It flexibly considers options on the fly.

Try to create a GPS for your software. Something that replans things quickly, so you can then start driving and enjoy the ride.

When applied to Variation: Plan to learn. Look at lots of options, figure out what will work. Don't plan for every single thing to succeed. The now popular practice of Design Thinking applies these principles. In it, you go through phases of thinking: Divergent (creating options) and convergent (choosing options). User stories should not be commitments, they should be possible turns on the roadmap.

When applied to Selection: Plan to discard mistakes. You have no idea what will work. Some assumptions will not be true. That's not because anybody's stupid, but because the world is not under your control (remember the problems with linear plans). User stories often have no "victory condition": the activity encoded in the story doesn't include one. You should ask what changes when the user story is done - how the activity is done differently afterwards. Then you can measure success not by unit tests and the like, but by whether the story created a behavioral change.

When applied to Survivability: Let's make sure we're always going in the right direction. When things go wrong, that shouldn't kill the project. There should be smaller feedback loops. That is why user stories should be small, not because they need to fit in an iteration. Business people couldn't care less about iterations.

Two concrete techniques: Impact mapping for building a hierarchical map from stories to business goals, and User Story Mapping for moving away from linear backlogs.

Eric Evans: Acknowledging CAP In The Root - In The Domain Model

Eric Evans wrote the original book on Domain-Driven Design, which I wasn't a fan of when I originally read it about 6 years ago. I re-read it last year and it made much more sense the second time, probably because of those 6 years of experience in between. I'm still not really a DDD practitioner, but I do like to think about things in its terms from time to time. In today's talk Eric connected some of the concepts of DDD to the CAP theorem in a few interesting ways, which gave some food for thought.

Eventual consistency is a way out of availability problems caused by both optimistic and pessimistic locking. You don't really want ad-hoc eventual consistency though, where you have no idea what's going to happen when. You can define consistency on the level of Aggregates. Aggregates function as consistency boundaries. They are always internally consistent. Invariants within Aggregates apply at every transaction commit.

An Aggregate is just an assertion, not a concrete implementation. One of the most important things about a model is the assertions in it. In this case, assertions about consistency and inconsistency. An implementation of the model should fulfill these assertions.
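To make the invariant idea concrete, here's a tiny sketch of my own - the Order and its invariant are hypothetical, not from the talk:

```javascript
// A hypothetical Order aggregate: the invariant "every line has a
// positive price and quantity" is enforced inside the boundary, so
// the total is always consistent with the lines.
function createOrder() {
    var lines = [];
    return {
        addLine: function (price, quantity) {
            if (price < 0 || quantity <= 0) {
                throw new Error("invariant violated");
            }
            lines.push({ price: price, quantity: quantity });
        },
        total: function () {
            return lines.reduce(function (sum, line) {
                return sum + line.price * line.quantity;
            }, 0);
        }
    };
}

var order = createOrder();
order.addLine(25, 2);
order.total(); // 50 - consistent within the aggregate's boundary
```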

Systems are almost always amalgamations of different parts. An assertion like the one discussed cannot apply across the whole system. This is where Bounded Contexts come in: within this boundary we know the model applies and we can enforce all these assertions. Outside it, the rules may not apply. We need to make fewer assertions at context boundaries. Transactional consistency mustn't cross context boundaries. When people do this, they pay for the mistake.

Dan North: Jackstones - The Journey To Mastery

Dan North is one of my favorite speakers - not only because what he says tends to make a lot of sense but also because he delivers it so well. I hadn't heard this talk before and really enjoyed it.

What does mastery mean?

You can define it as capability in a context. Capability is being able to do something. That ability takes time to acquire. Context is knowing where to apply the capability.

A concert pianist learns music theory, the mechanics of playing, individual pieces. They practice pitch, chords, scales, progressions, section by section. They engage in a lot of physical and mental repetition. For them, mastery is a constantly flawless performance. This doesn't really translate directly into programming, but could be applied to something like touch typing or using an IDE. Mastering your tools enables you to have zero cognitive load caused by them.

An ice hockey player learns how to skate, the rules of hockey, tactics and techniques, combinations and game plays. Unlike concert pianists, they're part of a team. Some stuff is out of their direct control. They practice individual techniques, team techniques and strategies, offensive and defensive technique. For them mastery is consistently playing at your best. It's the team's success that counts.

A soldier learns discipline (shutting up), survival techniques, decision making under pressure, and personal physical and mental boundaries. Under pressure you make bad decisions without knowing it, so you need to learn at what point the pressure gets to you. Soldiers practice repetition of basic skills and unfamiliar scenarios. For them, mastery is adapting instinctively to unfolding events. You need to not be thinking about how you're going to react. You need to just know.

Mastery applied to software people

Skilled craftsmen (traditional ones, not software) learn as apprentices to a master. When they've learned everything they can, they move on to other masters - they become journeymen.

As an apprentice, find people who know what you want to do. Stalk them (or follow them on Twitter). Model them, as in "fake it til you make it". At some point you'll notice everyone is faking it to some extent and no one knows what they're doing (impostor syndrome).

You need to be vulnerable and say "I don't know how to do this." We're really bad at this particularly in the west.

Solve real problems. Don't just learn to swim with armbands. A code kata is not a real problem. Learn it first, but then take off the armbands and do something real. There are tons of projects on GitHub that could use your help.

Study the basics. If you don't use practices like XP, TDD, SOLID principles, do learn them first so you'll know why you're not using them.

As a journeyman, build a portfolio. Try different approaches. Try different domains. See which of the principles you thought you knew don't apply universally.

Learn how to learn and how to practice. There are different learning modes for different people (kinesthetic, auditory, visual...). Learn yours.

Listen as if you don't know the answer, because you might not. Shut up. If you're right, you'll still be right in 5 minutes. If you're wrong, you'll look like an idiot.

If it ain't broke, fix it anyway. Try different kinds of languages. Solve the same thing in different ways. You might come up with a nicer solution, and you'll also learn new approaches to apply to other problems.

As a master, remember where you started. Share your toys. Remember what it felt like. The best programmers make time. Let the learner make the discoveries.

Douglas Crockford: The Better Parts

Douglas Crockford is the discoverer of JSON and one of the high priests of JavaScript. I really enjoyed watching his Crockford on JavaScript lectures and was looking forward to this talk. It didn't disappoint, and was really in some sense a continuation of the "on JavaScript" lectures.

"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry

This quote on design is often borrowed for all kinds of purposes in design, architecture, and engineering - software included. Software has a special need for perfection, and this quote gives us guidance: toward perfection by subtraction.

Avoiding the bad parts of programming languages reduces the likelihood of mistakes. If a feature is useful but also sometimes dangerous, and there's a better option, then always use the better option. We are not paid to use every feature of the language. We are paid to write programs that work well and are free of error.
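The canonical Good Parts example of this principle is preferring === over ==, since the latter does type coercion with surprising results (my illustration, not a slide from the talk):

```javascript
"" == 0;   // true - coercion at work
"0" == 0;  // true
"" == "0"; // false - == isn't even transitive
"" === 0;  // false, as you'd expect: always use the better option
```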

A good programming language should teach you. Accumulating language knowledge makes you a better programmer in general. There are lots of people that come from universities only knowing one language. This is sad. Programming languages are our most fundamental tools and we should learn lots of them.

When Crockford first started with JavaScript, he made every mistake you can make. That was because he didn't bother to learn it, since it looked like something he knew already. The language responded brutally. (I think many people can relate to this when it comes to JavaScript.)

Most of Crockford's JavaScript schooling came through JSLint, the tool that lets you know when there's something wrong in a JS program. The Good Parts book also came from this work. Those same good parts are still the good parts; the book is not obsolete.

To the arguments often heard against The Good Parts, Crockford's answer is that it is not only possible to write good programs in JavaScript - it is necessary. Because the language is so flimsy, you have to.

JavaScript was originally intended to be a language for beginners. The syntax came from Java (and further back from C and B), which has semicolons. The rules for semicolons are slightly complicated, and Brendan Eich was concerned that beginners could not manage them, so he added automatic semicolon insertion. He had 10 days and made mistakes.
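The classic illustration of what automatic semicolon insertion can do (my example, though a well-known one):

```javascript
// A semicolon is silently inserted after "return", so the object
// literal below is never reached and the function returns undefined.
function makeStatus() {
    return
    {
        ok: false
    };
}

makeStatus(); // undefined, not an object
```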

There are two things in software that are difficult to estimate: A) The time it takes to write the code. Hard to guess, often wrong by a factor of 2 or even 10. Very rarely right. B) The time it takes to make the code work right. Ideally 0, but often bigger than A. Sometimes infinite - it may never work right. Anything that saves time in A but adds to B is a terrible mistake. Always take the time to code well.

Good Parts Reconsidered

In the book, Crockford did not recommend new: he considered it a Javaism and a hindrance to learning, and recommended Object.create instead. He has since stopped using Object.create, because he stopped using this. That came about through research into safe JavaScript subsets, which had problems with this; he really liked one of the resulting dialects and started using it.
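For reference, the Object.create pattern the book recommended over new looks like this (a minimal sketch of my own):

```javascript
var base = {
    describe: function () {
        return "x is " + this.x;
    }
};

// A new object whose prototype is base - no constructor, no new:
var derived = Object.create(base);
derived.x = 1;
derived.describe(); // "x is 1"
```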

He no longer uses for or for..in loops, having replaced them with ES5 array methods such as forEach. Proper tail calls in ES6 will eliminate the need for while.
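A quick sketch of both substitutions (my examples, not Crockford's):

```javascript
// ES5 array methods instead of a for loop:
["a", "b", "c"].forEach(function (item) {
    console.log(item);
});

// Recursion instead of while - with proper tail calls in ES6,
// a call like this would not grow the stack:
function countdown(n) {
    if (n === 0) { return; }
    console.log(n);
    return countdown(n - 1);
}
```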

The Next Language

The one that will replace JavaScript. Google says Dart, Microsoft says TypeScript. Both are pushing us backwards: they don't respect what JavaScript got right. JavaScript is succeeding because of the functional stuff.

It took a generation to agree that high level languages were good. It took a generation to agree goto was bad. It took a generation to agree objects were good. It took two generations to agree lambdas were a good idea; they're only now getting some traction. The next language will also be dismissed.

Classical vs. Prototypal Inheritance

Of the two, prototypal is the more powerful. You can easily simulate classical in prototypal, but not vice versa.
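A sketch of what simulating the classical style on top of prototypes looks like (my example):

```javascript
// A "class" is a constructor plus a prototype chain:
function Animal(name) {
    this.name = name;
}
Animal.prototype.speak = function () {
    return this.name + " makes a sound";
};

function Dog(name) {
    Animal.call(this, name); // the "super" call
}
Dog.prototype = Object.create(Animal.prototype); // the "extends"
Dog.prototype.speak = function () {              // the "override"
    return this.name + " barks";
};

new Dog("Rex").speak(); // "Rex barks"
```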

Classical is all about classification and taxonomy. It gets really complicated. You also usually do it at the beginning, when it's most likely you'll get it wrong.

Crockford is no longer a fan of prototypal inheritance. Its principal value was in memory conservation. Moore's law has made that a waste of time.

Prototypal inheritance causes confusion between own and inherited properties. This sometimes causes bugs. Not always, but it's a source of error we don't need.
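The usual way this bites people is for..in, which happily walks the prototype chain (my sketch):

```javascript
var parent = { inherited: true };
var child = Object.create(parent);
child.own = true;

// Logs both "own" and "inherited", because for..in doesn't
// distinguish own properties from inherited ones:
for (var key in child) {
    console.log(key);
}

child.hasOwnProperty("own");       // true
child.hasOwnProperty("inherited"); // false
```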

Another big problem in prototypal inheritance is retroactive heredity. You can choose what an object inherits after it's been created. This inhibits performance. Engine makers make engines faster by making assumptions about the shape of objects. They need to be pessimistic about inheritance because it may change.
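A sketch of the retroactive part (my example; ES6 standardizes this operation as Object.setPrototypeOf, and many engines also expose __proto__):

```javascript
var polite = { greet: function () { return "hello"; } };
var rude = { greet: function () { return "go away"; } };

var obj = Object.create(polite);
obj.greet(); // "hello"

// The inheritance can be changed after the object was created,
// so engines can't assume a stable object shape:
Object.setPrototypeOf(obj, rude);
obj.greet(); // "go away"
```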

The big innovation of JavaScript was not prototypal inheritance, but class-free object oriented programming.
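A minimal sketch of that class-free style, in the spirit of the patterns Crockford shows elsewhere (my example):

```javascript
// State lives in the closure; the returned object exposes only
// functions. No new, no this, no prototype chain.
function counter() {
    var count = 0;
    return {
        increment: function () { count += 1; },
        value: function () { return count; }
    };
}

var c = counter();
c.increment();
c.value(); // 1
```

Nothing can touch count except the functions that close over it.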
