We all say we like data, but we don’t.
We like getting insight out of data. That’s not quite the same as liking the data itself.
In fact, I dare say that I don’t quite care for data. It sounds like I’m not alone.
It’s tough to nail down a precise definition of “Bad Data.” Some people consider it a purely hands-on, technical phenomenon: missing values, malformed records, and cranky file formats. Sure, that’s part of the picture, but Bad Data is so much more. It includes data that eats up your time, keeps you late at the office, and drives you to tear out your hair in frustration. It’s data that you can’t access, data that you had and then lost, data that’s not the same today as it was yesterday…
In short, Bad Data is data that gets in the way. There are so many ways to get there, from cranky storage, to poor representation, to misguided policy. If you stick with this data science bit long enough, you’ll certainly encounter your fair share.
To that end, we decided to compile the Bad Data Handbook, a rogues’ gallery of data troublemakers. We found 19 people from all reaches of the data arena to talk about how data issues have bitten them, and how they’ve healed.
You can’t assume that a new dataset is clean and ready for analysis. Kevin Fink’s Is It Just Me, or Does This Data Smell Funny? (Chapter 2) offers several techniques to take the data for a test drive.
There’s plenty of data trapped in spreadsheets, a format as prolific as it is inconvenient for analysis efforts. In Data Intended for Human Consumption, Not Machine Consumption (Chapter 3), Paul Murrell shows off moves to help you extract that data into something more usable.
If you’re working with text data, sooner or later a character encoding bug will bite you. Bad Data Lurking in Plain Text (Chapter 4), by Josh Levy, explains what sort of problems await and how to handle them.
To wrap up, Adam Laiacano’s (Re)Organizing the Web’s Data (Chapter 5) walks you through everything that can go wrong in a web-scraping effort.
Sure, people lie in online reviews. Jacob Perkins found out that people lie in some very strange ways. Take a look at Detecting Liars and the Confused in Contradictory Online Reviews (Chapter 6) to learn how Jacob’s natural-language processing (NLP) work uncovered this new breed of lie.
Of all the things that can go wrong with data, we can at least rely on unique identifiers, right? In When Data and Reality Don’t Match (Chapter 9), Spencer Burns turns to his experience in financial markets to explain why that’s not always the case.
The industry is still trying to assign a precise meaning to the term “data scientist,” but we all agree that writing software is part of the package. Richard Cotton’s Blood, Sweat, and Urine (Chapter 8) offers sage advice from a software developer’s perspective.
Philipp K. Janert questions whether there is such a thing as truly bad data, in Will the Bad Data Please Stand Up? (Chapter 7).
Your data may have problems, and you might not even know it. As Jonathan A. Schwabish explains in Subtle Sources of Bias and Error (Chapter 10), how you collect that data determines what will hurt you.
In Don’t Let the Perfect Be the Enemy of the Good: Is Bad Data Really Bad? (Chapter 11), Brett J. Goldstein’s career retrospective explains how dirty data will give your classical statistics training a harsh reality check.
How you store your data weighs heavily on how you can analyze it. Bobby Norton explains how to spot a graph data structure that’s trapped in a relational database in Crouching Table, Hidden Network (Chapter 13).
Cloud computing’s scalability and flexibility make it an attractive choice for the demands of large-scale data analysis, but it’s not without its faults. In Myths of Cloud Computing (Chapter 14), Steve Francia dissects some of those assumptions so you don’t have to find out the hard way.
We debate using relational databases over NoSQL products, Mongo over Couch, or one Hadoop-based storage format over another. Tim McNamara’s When Databases Attack: A Guide for When to Stick to Files (Chapter 12) offers another, simpler option for storage.
Sometimes you don’t have enough work to hire a full-time data scientist, or maybe you need a particular skill you don’t have in-house. In How to Feed and Care for Your Machine-Learning Experts (Chapter 16), Pete Warden explains how to outsource a machine-learning effort.
Corporate bureaucracy can build roadblocks that keep you from analyzing the data at all. Marck Vaisman uses The Dark Side of Data Science (Chapter 15) to document several worst practices that you should avoid.
Sure, you know the methods you used, but do you truly understand how those final figures came to be? Reid Draper’s Data Traceability (Chapter 17) is food for thought for your data processing pipelines.
Data is particularly bad when it’s in the wrong place: it’s supposed to be inside but it’s gotten outside, or it still exists when it’s supposed to have been removed. In Social Media: Erasable Ink? (Chapter 18), Jud Valeski looks to the future of social media, and thinks through a much-needed recall feature.
To close out the book, I pair up with longtime cohort Ken Gleason on Data Quality Analysis Demystified: Knowing When Your Data Is Good Enough (Chapter 19). In this complement to Kevin Fink’s article, we explain how to assess your data’s quality, and how to build a structure around a data quality effort.