Interviews

Hog Bay Software: Jesse Grosjean

Company: Hog Bay Software

Position: Founder

Background: Jesse Grosjean is the one-man show behind Bangor, Maine–based Hog Bay Software. His software includes the popular distraction-free writing tool WriteRoom and the simple “paper-like” to-do list application TaskPaper, both of which are available for the Mac and iPhone.

Link: http://www.hogbaysoftware.com/

Ken: What’s the most important step when designing a new application?

Jesse: I always start each product idea with a positioning statement following Geoffrey A. Moore’s format:

For [target end user] who wants/needs [compelling reason to buy]. [Product name] is a [product category] that provides [key benefit]. Unlike [main competitor], [Product name] [key differentiation].

This sets a foundation for the app that I use throughout the development process, and it provides the basis for marketing the app after release. Here are the positioning statements for TaskPaper and WriteRoom as examples:

TaskPaper: For Mac & iPhone users to make lists and stay organized. TaskPaper is a simple to-do list that’s surprisingly adept. Unlike standard organizers, TaskPaper gets out of your way so that you can get things done.

WriteRoom: For Mac & iPhone users to write without distractions. WriteRoom is a full-screen writing environment. Unlike the cluttered word processors you’re used to, WriteRoom lets you focus on writing.

Ken: You now have an active community of people willing to test your software and apps, but “back in the early days” when that wasn’t the case, how did you go about getting customer feedback? How would you suggest developers do that when dealing with iPhone apps and Apple’s App Store?

Jesse: Make sure that you start with an interesting idea that you can describe (positioning statement). Once you have that, it shouldn’t be hard to get feedback. You don’t need lots of users, just three or four different conversations going on about the app as you build it. A good positioning statement is important at this stage so that you stay focused on the goal for 1.0 instead of just adding feature upon feature suggested by your early testers.

Ken: What guidance do you provide your customers when they test beta software? Do you script them in any way about what they should be testing or just let them have at it?

Jesse: I just let them have at it. Generally, I include a few notes of changes to look for in each new beta, but that’s about it. I do it this way because I don’t think that I personally would be organized enough, or have enough time, to create and maintain any more structure in the testing process.

Ken: How do you interact with your testers and community? Are there tools you use to manage those interactions? Do you use different tools internally for yourself versus externally for testers?

Jesse: Conversations can happen in my user forums, blog comments, or email. Recently (for iPhone beta testing) I’ve tried to stick more to email with all testers Bcc’d.

Ken: When dealing with bugs in particular, what do you request your testers provide you in order to more quickly resolve them? What types of bugs do your testers usually find that you miss in your own QA?

Jesse: Bugs need a reproducible case. I’ll look into bugs that are not reproducible, but that’s almost always a losing battle. In general, testers find bugs of all sorts, but for me the more useful part of the beta process is the feedback I get about how the application actually works. Most of the conversation is about features to add/remove/change as opposed to bugs to fix.

Ken: Once you have feedback from your customers, how do you determine what features get added, removed, or changed in the first version of the App Store release? In particular with the gated App Store approval process, how do you determine what goes into the first release versus an update?

Jesse: I think a good rule of thumb is to release 1.0 as soon as the app’s functionality fulfills the app’s positioning statement. For subsequent updates, I just release when it seems right, balancing the desire to implement everything on my to-do list against the desire to get the release out as soon as possible. Generally, my release pattern is to do a big release with lots of changes, followed by a number of quick releases that fix bugs or polish behavior. Then not much [is released] until the next big release.

Mariner Software: Michael Wray

Company: Mariner Software

Position: President

Background: Michael Wray began his career in the Macintosh software industry by co-founding Power On Software in the early 1990s. In 2002, he took over the helm of Mariner Software. More recently, he brought Mariner into the iPhone and, ultimately, the iPad markets with offerings on the App Store.

Link: http://www.marinersoftware.com/

Ken: You now have an active community of people willing to test your software and apps, but “back in the early days” when that wasn’t the case, how did you go about getting customer feedback? How would you suggest developers do that when dealing with iPhone apps and Apple’s App Store?

Michael: Name any era in Macintosh computing and one thing remains consistent: Mac users love to volunteer to test software. In our case, whether in the early System 7 days, at the release of Mac OS X, or today, with the iPhone and iPad generation, Apple die-hards are eager to test new technology. Many users reached out to us years ago and have remained faithful testers even today.

Here at Mariner we make it really easy to apply to be a tester: we put a blurb on our website’s home page, post a notification in our forum, and send out newsletters reminding users they can “steer the Mariner ship” by participating in testing new products. With iPhone testing, as odd as it sounds, we literally have to be picky about whom we accept as a volunteer tester. Since Apple limits the number of participants in ad hoc testing, we need to make sure whoever is participating is actually serious about testing.

Ken: What guidance do you provide your customers when they test beta software? Do you script them in any way about what they should be testing or just let them have at it?

Michael: When an app is in the early stages of testing, we try to set expectations for the user. We are fortunate to have hundreds of testers to choose from who have strong beta-testing experience and thus know what to expect. Obviously, the words “PLEASE MAKE SURE YOU HAVE A BACKUP SYSTEM IN PLACE” are littered all over the documentation. The last thing we want is to be responsible for losing a user’s data (fortunately, it hasn’t happened to us). In terms of a script or path for users to follow, we rarely give them that direction. Having users run our software in their own environments and workflows is crucial for us. We have a pretty good testing lab, but there’s no way we would ever be able to duplicate what users see on an everyday basis.

Ken: How do you interact with your testers and community? Are there tools you use to manage those interactions? Do you use different tools internally for yourself than the ones you use externally for testers?

Michael: We have a private (password-protected) forum where we communicate with our testers (and they with us and other testers). We have also implemented a support/knowledge base solution called Tender that is really helping with communication.

Ken: When dealing with bugs in particular, what do you request your testers provide you in order to more quickly resolve them? What types of bugs do your testers usually find that you miss in your own QA?

Michael: We encourage them to provide us with a bug report so we can see exactly what was happening at the time of a crash. Providing in-depth background information for our QA folks is critical to reproducing a problem. When we need to go back and forth over a few emails to get that information, the solution is usually delayed. Our testers usually find oddball bugs that turn up only in small numbers. That being said, there have been occasions where we missed something that I’m sure our users were scratching their heads over.

Ken: Once you have feedback from your customers, how do you determine what features get added, removed, or changed in the first version of the App Store release? In particular with the gated App Store approval process, how do you determine what goes into the first release and what goes into an update?

Michael: For every first-version app in the App Store, we have been able to draw on a close group of power users (as well as on internal feedback). Honestly, a lot of it comes down to supply and demand. If we get enough “demand” for a feature from several power users we poll (and it doesn’t take a year to implement!), we usually “supply” it. In terms of what makes the cut in version 1.0 of an app, many times it’s bandwidth-driven. If, at that time, we have development bandwidth available, we can usually add more features in version 1.0. If our development team is juggling several different products under major deadlines, the reality is that sometimes features will slip to a later release. It’s not ideal, but we do the best we can with the resources we have.
