The contributor for this chapter is David Madigan, professor and chair of statistics at Columbia. Madigan has over 100 publications in such areas as Bayesian statistics, text mining, Monte Carlo methods, pharmacovigilance, and probabilistic graphical models.
Madigan went to college at Trinity College Dublin in 1980 and specialized in math until his final year, when he took a bunch of stats courses and learned a lot about computers: Pascal, operating systems, compilers, artificial intelligence, database theory, and rudimentary computing skills. He then worked in industry for six years, first at an insurance company and then at a software company, where he specialized in expert systems.
It was a mainframe environment, and he wrote code to price insurance policies using what would now be described as scripting languages. He also learned about graphics by creating a graphic representation of a water treatment system. He learned about controlling graphics cards on PCs, but he still didn’t know about data.
Next he got a PhD, also from Trinity College Dublin, went into academia, and became a tenured professor at the University of Washington. That's when machine learning and data mining took off, and he fell in love with those fields: he was program chair of the KDD conference, among other things. He learned C, Java, R, and S+. But he still wasn't really working with data yet.
He claims he was still a typical academic statistician: he had computing skills ...