When new Safari was just a baby experiment called Safari Flow, we had one product designer working with some excellent freelancers, and we could play fast and loose with our workflow. But as we’ve made the transition to new Safari, the stakes, naturally, have gotten higher. Product managers and designers are faced with the challenge of providing direction and support to a wider array of products, and we need to be much more deliberate about the work we do.
Product design is part strategy, part tactical implementation, and it can be difficult to balance those two different kinds of design work. Good product designers should be able to tackle high-level problems, as well as offer support to the team on day-to-day development happening on the ground.
Much has been written about the friction inherent in integrating thoughtful design work into the heady loop of an agile workflow. Design, by its nature, is up-front, foundational work — you can’t design a house after you’ve built it. (Well, you probably can, but you really shouldn’t.) Agile methodology puts an emphasis on rapid development and constant iteration based on feedback. But running an agile workflow doesn’t mean you shouldn’t actually plan.
In fact, planning is key to a successful agile workflow. Proper planning allows a team to discover many problems earlier than they otherwise would have. It increases the speed and fidelity of communication during handoffs, and increases the predictability of your team. It also makes for stronger, properly contextualized work. One of our jobs as product designers at Safari is to help keep the team centered: to help product managers hew to the spirit of the larger vision and values of our products even as we facilitate the day-to-day work that developers do.
That all sounds nice and lofty, but it’s useless pablum if it doesn’t work in practice. We strive to run design work through the now generally accepted method for integrating design work into an agile workflow: we do research and general ideation with product managers during iteration n+2; design, prototype, and refine requirements with devs during iteration n+1; support iteration n; and review/test iteration n-1. Here’s what this looks like, all Gantt chart-y and stuff:
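The same staggered schedule can be sketched in code. This is a minimal illustration of our own (the names and structure are ours, not from any real tooling): in any single sprint, design is touching four different iterations at once.

```python
# Map each iteration offset to the design activity it gets. During any one
# sprint, all four of these activities are happening in parallel, each
# aimed at a different iteration.
PIPELINE = {
    2: "research and ideation with product managers",
    1: "design, prototyping, and requirements refinement with devs",
    0: "support for the iteration currently in development",
    -1: "review, testing, and validation of shipped work",
}

def activities(current_sprint):
    """Return {iteration number: design activity} for a given sprint."""
    return {current_sprint + offset: activity
            for offset, activity in PIPELINE.items()}

# During sprint 12, design is researching sprint 14, designing sprint 13,
# supporting sprint 12, and reviewing sprint 11 — all at once.
print(activities(12))
```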
This is an excellent framework to aspire to, but it’s also important to keep in mind that it’s a platonic ideal — things are rarely as orderly and aligned as this Gantt chart implies. At Safari, we attempt to achieve this workflow nirvana. Often we’re firing on all cylinders. But I’m not gonna lie: sometimes we suck at it. Did I ever tell you about that one time two product managers, one project manager, a developer, and a contractor all had to take unexpected downtime of one form or another, all at the same time? Hey, it happens. To us, last month. (We did remarkably ok. We even shipped an iOS app.)
But the beauty of a feedback loop is that it’s a loop — every new sprint is a chance for us to do it again, but better. Recently, a bunch of us have been geeking out about process, and we’ve been firming up the workflow around, among other things, design at Safari. Integrating design into agile is tricky business, so I figured I’d share our approach and some of our lessons.
Sprint n+2: Research and Ideation
Everything begins with a problem to solve — be it a user need, a business requirement, a bug, or an improvement. Hopefully we’ve captured that initial problem in a JIRA ticket (and if we haven’t, we do so now). This stage is all about formulating that problem, making sure we understand it well, and coming up with general ideas to pursue. We spend most of this time with product managers, and pull in other people as needed, whenever we have questions or want to run general sanity checks.
Pro tip: pinging your QA and Customer Service people for tactical sanity checks and general bubble-bursting at this stage is incredibly useful; I can’t stress that enough: much love to our QA and CS peeps. They keep us honest like nobody else.
Safari is a partly-distributed company, and designers are — for now — firmly on Team Remote-Workers. But even the most hardcore nomad will tell you that meeting in meatspace from time to time still matters. We’ve found that this phase, in particular, benefits greatly from locking product managers and designers in a room with a great big whiteboard and a problem to solve. It can be a bit challenging, since Peter and Bill work in the Bay Area, and Loz and Eoin are in Europe, but since I’m in New York, I try to go up to Boston on a regular basis to duke it out in person with Adam and Jen (who does not have a twitter account).
By the end of this sprint, we should have some research findings, relatively well-defined tickets with some whiteboard diagrams for user flows, some paper prototypes, or maybe even a static comp attached, and a clear direction (or two) to flesh out.
When writing up tickets, we often divide them up into ‘design’ and ‘implementation’ tickets. This allows us to allocate the work more granularly, and helps reinforce the concept of working ahead of a sprint: in any given sprint, there may be design tickets being worked on that are related to (and often block) implementation tickets scheduled for the following sprint.
Making separate design and implementation tickets does add more noise, and Scott, our Director of Product Services, and I resisted doing it for some time. But eventually we realized that the added noise is an acceptable tradeoff for the amount of clarity doing so brings to our resource allocation. Besides, if a change to the site just involves some CSS, and Loz or I can pop into the codebase and commit the changes directly while working on the design ticket, then we simply close out the implementation ticket, along with a note to refer to the design ticket, and submit a pull request. Congratulations, these designers ship the codes. We’re 1337 like that. (Do people even say ‘1337’ anymore?)
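The design/implementation split boils down to a simple dependency rule: a design ticket worked in one sprint blocks its implementation ticket in the next. Here’s a toy model of that relationship (the ticket keys and field names are hypothetical, not our real JIRA schema):

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    key: str
    kind: str                    # "design" or "implementation"
    sprint: int
    done: bool = False
    blocked_by: list = field(default_factory=list)

def ready_for_sprint(tickets, sprint):
    """Implementation tickets in `sprint` whose design blockers are all done."""
    return [t for t in tickets
            if t.kind == "implementation" and t.sprint == sprint
            and all(b.done for b in t.blocked_by)]

# A finished design ticket unblocks its implementation ticket; an
# unfinished one keeps its implementation ticket out of the sprint.
design = Ticket("PROJ-101", "design", sprint=4, done=True)
impl = Ticket("PROJ-102", "implementation", sprint=5, blocked_by=[design])
stalled = Ticket("PROJ-104", "implementation", sprint=5,
                 blocked_by=[Ticket("PROJ-103", "design", sprint=4)])

print([t.key for t in ready_for_sprint([impl, stalled], 5)])  # only PROJ-102
```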
Sprint n+1: Design and Prototyping
This is our heads-down phase, where we go away and do some work. In the past, we’ve played it fast and loose in terms of what artifacts we generate: we’re often working in a combination of static comps made with Sketch or actual live prototypes made either by hand or using a tool like Macaw.
Just as we were gearing up to launch the Safari rebrand this summer, I asked Liza to hook us up with a Git repo that continually pushes to an internal server, explicitly so that we could post design work. (It’s pictured here. Sorry, no link — come work with us and you can check it out.) This internal site serves as a central place where we can post our quick and dirty prototypes, as well as place information like our brand guidelines, for everyone across the company to refer to.
As time has gone on, the benefits of working in live code have become more and more apparent, so we’ve been moving steadily from making just a few preliminary static comps (if any) to working directly in code. This helps increase the fidelity of our work, which increases the efficiency of our communication with developers. (If you’ve ever had a conversation go along the lines of “no, no, I want it to behave like this, not like that,” then you know what I mean.) It also has the added bonus of providing a really good point of reference for our friends in QA.
As a bonus, in the process of making these one-off prototypes, we’ve ended up generating an ad-hoc pattern library. We’re currently figuring out how best to integrate CodeKit into our workflow, in order to build up a more permanent library of reusable styles and partials that we can mix and match when we need to build a new prototype.
Throughout the design process, we’re still refining our tickets in JIRA: asking questions, documenting answers, coaxing bits of missing information from people throughout the company, having thoughtful discussions in chatrooms, etc. This results in a ticket description that is increasingly comprehensive and clear, with a comments section that chronicles the thought process that has led us to that description. All accompanied by nice, detailed prototypes that depict exactly the interactions and behavior we want to see in the final implementation.
We also strive to come up with more than one solution to a problem. Sometimes that doesn’t happen—sometimes the answer to a problem becomes obvious quickly, and we can just execute and refine. But more often than not, when we have a couple of avenues to explore we’ll do so first as static comps in Sketch — that allows us to quickly put together user flows to show other people for feedback.
We recently began using LayerVault to store all our Sketch files, and we’ve found their presentations feature to be very useful for putting together quick, err, presentations of user flows. If, after gathering feedback on those, we still have more than one contender, then we move into making a prototype of each variation. During this sprint we should also start talking to our business intelligence humans to figure out how best to test some of the designs we’ve come up with once they’re implemented in the product.
By the end of this sprint, we should be in a good place. Design has been worked out, tickets should be nice and descriptive, and there should be a record of our thought process to date, so that dev and QA can refer to it and get some of their questions answered. We’re also not going anywhere; the job’s not finished.
Sprint n: Support for Dev and QA; Planning All The Things
As we hand things off to our devs to implement, we want to remain involved in the development process, in order to field any questions or react quickly to any unexpected needs. A new image asset here, an uncovered blind alley in our user flows there, a ‘surprise’ new requirement… it happens. Additionally, we want to make ourselves available to our QA specialists, who will no doubt sometimes need clarity when — despite our best efforts — a ticket description is ambiguous, or a discrepancy between prototype and implementation needs sorting out. We also get deep in the weeds with project managers, who are busy planning for future sprints.
Like most agile shops, we work in two-week sprints, and we’ve split our weeks as follows:
Sprint Week 1: Heads down, swimming.
The first week of a sprint is the ‘heads-down’ week. We try to keep our agile sprints synchronized across the company, so we generally kick off our sprints on a Monday and, aside from that kickoff, we try to get as much work done (and attend as few meetings) as possible. While we want to be heads-down and doing the work during the first week of a sprint, we also keep a low-intensity planning discussion going — filling in descriptions and asking questions about tickets in the backlog and in future sprints, discussing general allocation issues, estimating design tickets in the backlog, etc.
Sprint Week 2: Coming up for air, and where to swim to next.
The second week of each sprint is our ‘planning week’. During the planning week, as people are wrapping up their work for the current sprint and closing tickets, we also focus on scheduling — and locking down — the work for the upcoming sprint.
Taking a page from Keith and the Infrastructure team, we also have a points budget for design. We have a set number of points (based on the number of designers available) that we allocate across our products and projects, based on need. By Wednesday or Thursday, the points budget for the upcoming sprint should be locked down. This means that any horse-trading for points should happen sooner in the week than later, if possible.
By the end of the week, we’ve locked down the upcoming sprint (n+1): all tickets in sprint n+1 are estimated, and the tickets allocated to each project in sprint n+1 don’t exceed the points budget for each project. And since we’ve also already started defining work two sprints out (n+2), we can also start sketching out what that sprint is going to look like.
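The points budget is, at heart, a proportional split of a fixed pool. Here’s a rough sketch of how one might compute it (the project names and numbers are made up for illustration; this isn’t a tool we actually run):

```python
def allocate_points(total_points, needs):
    """Split `total_points` across projects proportionally to `needs`,
    rounding down and handing any leftover points to the neediest projects."""
    total_need = sum(needs.values())
    alloc = {p: total_points * n // total_need for p, n in needs.items()}
    remainder = total_points - sum(alloc.values())
    # Distribute the rounding remainder, one point at a time, by need.
    for p in sorted(needs, key=needs.get, reverse=True)[:remainder]:
        alloc[p] += 1
    return alloc

# Say two designers contribute 10 points each this sprint, and three
# projects have stated relative needs of 5, 3, and 2.
budget = allocate_points(20, {"web": 5, "ios": 3, "brand": 2})
print(budget)  # {'web': 10, 'ios': 6, 'brand': 4}
```

Any horse-trading then amounts to adjusting the `needs` weights before the budget locks down mid-week.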
Sprint n-1: Testing and Validation
Once the work is out there, we need to measure its effect on our users and make sure we adjust and course-correct as needed.
We do lots of instrumenting and data analysis, which gives us valuable insight into larger behavior patterns and quick feedback on some of our more tactical work. Back during sprint n+1, we worked with our BI peeps to set up the right instrumentation for properly testing our work, so now that we’ve shipped we can start looking at the numbers and seeing how reality squares up to our expectations.
On the product side, product managers in particular do lots of customer interviews, which often inform our thinking around a particular feature, or allow us to better manage our priorities based on user needs or complaints.
Testing and validation is an integral part of the feedback loop that underpins agile workflows, and in addition to the work we’re doing now, we’re starting to get to the point where we can spin up a more regimented user-testing program, which can run in sync with the testing and measuring we already do. Armed with these insights, we can take our findings and start the process from the top.