Hello and welcome! Since you were likely referred here from one of my public profiles, you probably have an idea of my skills and background. However, there’s more to my story than you’ll find on LinkedIn or online resume listings. Over the past few years I have returned to my original passion – big data, artificial intelligence and related areas of data science. I built this site to showcase anonymized and abstracted examples of that work, along with personal side projects in mobile and IoT. As these projects mature I’ll extend this portfolio with code excerpts and other details about each project.

Aside from posting my updated CV here, I also share some details about my background that you won’t find elsewhere.


My Time with Moog & Accidental Machine Learning

(L-to-R) Myself, Bob Moog and Dave Perkins (my first hire) at Big Briar – Asheville, NC

My professional history begins at Big Briar, the company electronic music titan Bob Moog started before he reclaimed the “Moog Music” name several years later. But prior to that I was one of Bob’s university students, and that is where I had my first brush with machine learning. Before I learned that Bob had joined the world of academia, I had spent two years studying composition and theory in a conservatory environment. When I got wind that Bob (then “Dr. Moog”) had become the Research Professor of Music at the University of North Carolina, Asheville – practically in my back yard – I transferred to study electronic music “from the wires up” on a formal basis. At the time he joined the faculty, the plan was to build a computer music research facility to rival those of nationally known programs. More broadly, the audio engineering program at UNC-A was gaining a very good reputation in both the creative and technical components of its curriculum. On the technical side there were hardware and software engineering requirements, and I took full advantage of both, which kept me around the electronic music and computer science labs.

When it came time to find an academic sponsor for my final project, I immediately thought of Bob as the perfect fit. Initially he declined, not only because of his other schedule demands but also because of the nature of the project; he believed a computer science professor would be a better fit. I argued otherwise, since the music component of the project would be more challenging than the computer engineering element, and he eventually agreed. That could have been because of Bob’s personal connection to the work of Leon Theremin, a contemporary of and collaborator with a figure central to my research. It also could have been because I was volunteering my time to help him complete his article for the Encyclopedia of Applied Physics, and he thought it only fair to return the favor. Whether it was personal interest or a mild guilt trip, I was ecstatic to have him on board. Not only was dropping his name an easy way to recruit other faculty members to pitch in, but he later became the linchpin in turning the conceptual design on its head. That critical moment led to the project’s eventual (measured) success.

The research centered on the early theoretical work of the Russian composer and physicist Joseph Schillinger. In the 1920s and 30s he developed a mathematical system of music composition. His method later became well known through his students, who were among the most popular American composers of that era. I became familiar with Schillinger and his system before my time at UNC-A, as a composition student of Dr. David Berry at the Petrie School of Music. Shortly before I transferred, he gave me a two-volume set of Schillinger’s correspondence courses along with a few other texts Schillinger had written. In many ways the system foreshadowed the electronic music-making that producers take for granted today, but it also contained ideas and concepts I had never seen before. I was mesmerized, and even though I didn’t immediately grok all of what the system had to offer, it struck me that Schillinger’s full method could take on new dimensions in a computing environment. It wasn’t until I had several years of technical schooling under my belt (and a few years spent wrapping my head around Schillinger’s thesis) that I felt comfortable delving into the mechanics of an application.

Original chart by Joseph Schillinger graphing J.S. Bach’s Invention no. 8 in F Major

In the summer before my final year at UNC-A I applied for and received a research grant for the project. A large part of the success of that grant application was setting a target that seemed achievable. The dilemma was to balance an example with “useful” complexity against one simple enough to complete within the project’s time window. In short, I wanted to produce something between “Hello World” and IBM’s Deep Blue – within a few months. Musical styles vary widely, and Schillinger’s system claims to encompass all of them. So the challenge was to show enough of the system to effectively demonstrate its use in a computing environment without getting lost in the weeds. Further, the demonstration had to be apparent to non-musicians (and to non-computer-scientists) without too much prompting. I decided on the fugue as a primary subject, starting with works from J.S. Bach’s “Well-Tempered Clavier”. From a general music theory perspective, the fugue already comes with a constellation of fairly well-understood rules, so mapping those tendencies into patterns seemed like a relatively easy “leap” to make. Another advantage is that the form is recognizable even to non-musicians – most people know a fugue when they hear it, even if they don’t know a specific composer or work by name. (I often use “Row, Row, Row Your Boat” as a base conceptual example – even though it’s not a fugue from a music theory standpoint.) And finally, having seen a Schillinger chart of a famous Bach invention, I thought it would be to my advantage to connect the application to his early demonstrations.

By those measures I had what I thought was a relatively attainable target. I had previously analyzed some of Bach’s fugues when studying music theory, so I felt I had a conceptual head start on the process. I began by defining container classes for the various musical structures and loading MIDI file data into small in-memory data sets, each containing several closely related works from the WTC. I then ran those structures through various permutations of the analyzed patterns (which I called “Schillinger rules”) to modify the themes and developments by greater and lesser degrees. Each resulting “secondary” structure was then played back as its own piece, so every permutation could be auditioned as a new work. The results ranged from sounding like the original Bach work – with occasional “wrong notes” – to sounding like the work of a poorly schooled student who had simply assembled a series of unrelated ideas. It was not what I expected but, as they say, it was a finding. A rough sketch of that first pass appears below.
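
For the technically curious, this is roughly the shape that first pass took. It is a minimal sketch in Python, purely for illustration; the names (Note, Phrase, apply_rule) are my own stand-ins rather than the original code, and the real rule set was far richer than a transposition plus a rhythmic scaling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    pitch: int       # MIDI note number
    duration: float  # duration in beats

@dataclass
class Phrase:
    notes: List[Note] = field(default_factory=list)

def apply_rule(phrase: Phrase, pitch_shift: int, duration_scale: float) -> Phrase:
    """Apply one 'Schillinger rule' permutation: transpose pitches and rescale durations."""
    return Phrase([Note(n.pitch + pitch_shift, n.duration * duration_scale)
                   for n in phrase.notes])

def generate_candidates(theme: Phrase) -> List[Phrase]:
    """Sweep a small parameter space to produce candidate variations of a theme."""
    candidates = []
    for shift in (-5, -2, 0, 2, 5):        # interval displacements
        for scale in (0.5, 1.0, 2.0):      # rhythmic diminution/augmentation
            candidates.append(apply_rule(theme, shift, scale))
    return candidates
```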

Bob suggested that I sit down with a computer science professor for some feedback. The critique I received was as un-musical as my initial results.

“This looks like a narrow vector field reconstruction – a pretty weak one. And to be honest you’re barely passing it enough data to qualify.”

Ouch. I remember it (or, more honestly, I specifically recall my embarrassment) like it was yesterday, and I still have a clear mental picture of everything going on in the room at the time. To my right was a professor building a ray tracing application on a SPARC system. Behind and to my left were atmospheric sciences majors working on a weather forecasting algorithm that fed on data from the National Climatic Data Center – also based in Asheville. I was the only person in the room who didn’t understand the terms he used, but I certainly understood what he meant. Fortunately I was also certain that any insult was purely unintentional. This professor had a reputation for being particularly direct, and I appreciated his candor and brevity – the primary reason I had asked for his input. Thankfully everyone else in the room was so engrossed in their own work that no one noticed the heat radiating from my face. Still, I did my best to hide it by pointing my nose into my notebook and feverishly taking notes as he continued to deconstruct my work.

This is what anyone would call humble beginnings. I sat down with Bob to go over the notes from that early review, and watched him pivot in his chair and gaze out the window as he digested what I had just read to him. I’m not sure how long the room was quiet, but it felt like an eternity had passed before he spoke again: “Have you thought about turning the conceptual model around?” I didn’t follow. He explained that the process would be to fully analyze and generate all possibilities, starting with the opening theme. The analog (!) he used was the process of troubleshooting an electronic circuit. The approach there is to separate the circuit into logical sections and solve for one area before moving on to the next; when connecting the sections, you look at how each later stage connects to the previous one and make sure it “connects back” properly. I had taken a few classes in electronic circuit design, but hadn’t made that logical leap until Bob pressed the point. And to be honest, it didn’t really make sense to me at first blush. He further reasoned that this is what composers actually do in the musical domain. That was a relatable idea to me – knowing the eventual cadence or “landing chord” at the end of a phrase, and making sure the melody and harmony arrive at the right time in the correct register. That became my conceptual hook, but the implications were daunting.

It would break down my initial concept of one rule guiding the work – a precept I had presumed to be central to Schillinger’s thesis. I argued that generating distributed data sets and then using Schillinger’s rules to audition and select from them was the direct opposite of his original intent. Bob then made a point that would eventually change the way I viewed Schillinger in particular and computing in general. He said I was forcing Schillinger’s method to be a prescriptive system, when Schillinger and his students had used it both descriptively and prescriptively. His point was that Schillinger’s students used the system both for analysis and for generating new ideas, and that I should model that behavior as much as I mimicked the mathematical permutations of his system. I was still resistant, arguing that “a shotgun approach” would invalidate Schillinger’s method. Bob then said something that brought me around to the idea:

“If Joseph Schillinger was alive today – with all of the technology and tools at his disposal – do you think he would at least try this approach, or do you think he’d stay with graph paper and pencil?”

I was both excited and daunted by the implications of that rhetorical question. It meant loosening my (naive, dogmatic) concept of Schillinger’s system. It also meant starting over. But I was stuck – and this was a brand new idea with several conceptual underpinnings I hadn’t considered before. I began (again) by re-analyzing Bach’s work – treating each theme, variation and transition as its own “Schillinger rule”. Each rule would be read as a pattern by the application, and all permutations of that pattern (according to Schillinger’s system) would be generated to create new “candidate” thematic material. But this time the permutations wouldn’t propagate to variations and other elements “downstream” in the piece. Instead, the “rules” from the subsequent section of the work would be read in and used to generate a similarity rank against all of the permutations of the original theme. It was, in effect, a self-training model, and it had the desired effect of taming some of the wilder values generated by a strict mathematical propagation of musical pitches and durations. From that “reverse imposition” of rules a ranking system developed, with permutations that matched the variations downstream in the timeline given preference over those that didn’t.
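
Continuing the earlier sketch (with the same caveat that these are illustrative stand-ins rather than the original code), the “reverse imposition” amounted to something like the following, with the downstream rules modeled as callables that transform a candidate phrase:

```python
def rank_candidates(candidates, downstream_rules, reference_variation):
    """Rank each candidate theme by how well it 'connects back' to the material
    that follows it: apply the rules extracted from the downstream section and
    compare the result to the variation Bach actually wrote."""
    ranked = []
    for candidate in candidates:
        projections = [rule(candidate) for rule in downstream_rules]
        score = max((similarity(p, reference_variation) for p in projections), default=0.0)
        ranked.append((score, candidate))
    return sorted(ranked, key=lambda pair: pair[0], reverse=True)

def similarity(phrase_a, phrase_b):
    """A deliberately naive similarity measure (Jaccard overlap of pitch/duration
    pairs). The real project would have needed musically informed comparisons;
    this only shows the shape of the idea."""
    pairs_a = {(n.pitch, round(n.duration, 3)) for n in phrase_a.notes}
    pairs_b = {(n.pitch, round(n.duration, 3)) for n in phrase_b.notes}
    if not pairs_a or not pairs_b:
        return 0.0
    return len(pairs_a & pairs_b) / len(pairs_a | pairs_b)
```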

That was the good news. The bad news was that the process was slow – really slow. I was stepping out of the application to audition highly ranked (and some low-ranked) versions to determine which ones sounded better “to my ear”, and I decided to grade them separately myself, adding a new “perceptual rank” for each generated phrase. Things felt like they were trending in the right direction musically, but in the wrong direction time-wise. On a creative/compositional level, I knew I could write counterpoint by hand faster than this process allowed – which in and of itself ran counter to the claims Schillinger made about his system. The other problem was simple calendar time: eventually I had to present my findings and submit the work as part of completing my degree. But I was committed to this approach, as it certainly gave more useful results, if only in bits and pieces.
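
Here is a hedged sketch of how that human-in-the-loop grade could be folded into the automated ranking and stored alongside the rules that produced each phrase. The storage format and weighting are my own illustration, not the original design:

```python
import json

def combined_score(rule_rank: float, perceptual_rank: float, weight: float = 0.5) -> float:
    """Blend the automated, rule-derived rank with the human 'perceptual rank'.
    The 50/50 weighting is arbitrary and only for illustration."""
    return weight * rule_rank + (1.0 - weight) * perceptual_rank

def persist_result(path: str, theme_id: str, rules_used: list,
                   rule_rank: float, perceptual_rank: float) -> None:
    """Append a generated theme, the rules that produced it, and both ranks to a
    JSON-lines history file: the growing record described in the next paragraphs."""
    record = {
        "theme_id": theme_id,
        "rules": rules_used,
        "rule_rank": rule_rank,
        "perceptual_rank": perceptual_rank,
        "combined": combined_score(rule_rank, perceptual_rank),
    }
    with open(path, "a") as history:
        history.write(json.dumps(record) + "\n")
```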

Initial note object grouping in Bach’s Fugue in F# major

The concept was to gradually reduce the stop-and-audition cycles (i.e. my auditioning and applying “perceptual ranks”) as the layered rule sets and ranks made better choices without my intervention. Eventually it became apparent that progress was being made. By persisting generated themes alongside the rules that created them (and their respective rankings), an ersatz semantic layer emerged that would make better choices as the history continued to grow and more sub-styles of fugue were analyzed. That large set of weighted factors yielded metadata well suited to meta-analysis. I was still a long way from that goal, but I happily abandoned my original thesis when the program started yielding themes and variations that sounded like real music. Time was running out to “complete” the project before the presentation deadline, though, and I was nowhere near the original stated goal. Still, there were some pretty interesting things happening, as I had built up enough information to produce fairly consistent theme-and-variation combinations. I wanted another round of feedback before the final presentation, so I returned to the computer science lab for a fresh appraisal of my work.

The professor had less to say during the second walk-through, and after a few minutes with the app in debug mode he called some of his students over to watch the application recycle the first results. As we proceeded, someone in the group used a term I had only encountered in passing – neural net. The instructor referred to it as a “machine learning application”, which was the first time I had encountered that term. Honestly, I was still unsure about my results, but everyone around me seemed pretty excited about them from a computer science standpoint. Later that day I met with Bob, and found him already discussing my project with the computer science professor. Bob turned and posited, “So I hear you’ve built an AI engine,” to which I replied “Have I?” with a silly grin on my face. We talked about how the project had progressed between major revisions, Bob’s pivotal recommendation, and how I hadn’t really closed the gap between my original concept and the application’s ability to create a new piece of music.

Aside from that, I had concerns about whether I was creating a “one-trick pony”. I wanted the application to properly express the Schillinger system, which is a general model – like a periodic table of elements for music. If I created something that analyzed and (eventually) generated music for only one genre, then the question would remain whether I had really established the validity of Schillinger’s system. The computer scientist answered with his signature deadpan: “It’s called an over-trained model, and in your case that would be a great problem to have.” Bob echoed, “That’s the kind of problem that doctoral theses are made of.” So even with what I considered a “partially” complete project, the final presentation went well. Faculty from both the music and computer science departments seemed pleased with what had been shown. I suppose they knew from the outset how presumptuous I had been when initially outlining the project, but that too seems to be a common “problem” in this kind of research. After graduation I set the project aside and haven’t thought about it much since, as “normal” life took over – including a full-time job working for Bob.

And as I write this, it strikes me that many of those residual lessons are familiar to the more recent “big data” projects I’ve undertaken:

  1. Starting with preconceived notions about results often leads to wasted effort
  2. The vast majority of project time was spent parsing and structuring input and interim data
  3. “Wrong” and “right” answers can weigh equally on confidence in the final result, and
  4. Well-understood dead-ends are more valuable than accidental successes

I have considered re-approaching this project, but I’ll save my thoughts on that for another time, and perhaps another forum.

Data Science in N-of-1 Health & Fitness Tracking

In the early aughts I worked on the TherapyEdge®¹ application suite. By that point in my career I had already held several positions in development and project management, and having just completed a lead data analysis and compliance role at IBM, I was hired as the Manager of Verification and Validation at TherapyEdge – my first formal role in the quality assurance domain. The application was revolutionary in that it could produce an HIV patient’s medication regimen recommendation in a few minutes of processing; previously, an HIV specialist could take days or weeks to factor in all of the variables from test results and come up with an individualized management program. One key to the system was a newly approved method for comparing genotype to phenotype testing and deriving a “virtual phenotype” model, which allowed a regimen to be calculated from the less time-consuming and less expensive genotype test. The other breakthrough was the system’s ability to assimilate and analyze large numbers of patient records to find the most fitting care recommendation for a given patient. The ultimate goal was a predictive construct that would help clinicians avoid medication resistance, conserve treatment options and, by extension, improve individual patient outcomes. This was my first encounter with “N-of-1” clinical trials and related fields of medical care and treatment. From a technical view it was also a new perspective – this was big data aimed at making a big difference in people’s lives.

My role was both technical and regulatory in nature, which gave me reporting lines to both the head of software and the company’s general counsel. From those overlapping responsibilities I was tasked with leading a matrixed team of medical professionals and software engineers (both development and test) in creating a portion of a decision support system now called TherapyEval. The system ingested large catalogs of medication data curated by the team – ARV and non-ARV formulations as well as over-the-counter drugs. We also created lists of food interaction and allergy/resistance information that could factor into a patient’s course of care. The technical portion of the system involved determining a correct and consistent scoring model for the type and level of interactions between medications that could be prescribed.

This meant looking at the combination and dosage of active ingredients – a level of detail “below” the prescription itself. On the application side, the resulting calculations were presented as prescription KPIs (low, medium and high indicators) noting patient risk for a given potential interaction. Alongside that base functionality there was a heavy reliance on medical reference and citation data to justify each risk score, and that reference material was surfaced in the application for clinician review. Our group’s job was to analyze all of that incoming data, scrutinize each entry for correct data and citation information, and then produce an approved “canonical” record that could be ingested by the main decision engine. As new commercial and generic medications were introduced to the market, or new information became available on existing products and their active ingredients, the group would evaluate changes to the system and ensure that all regulatory and compliance obligations had been met as the changes were integrated. As each version was completed, changes were noted in a management system and prepared for submission to the governing regulatory agencies. In more recent years the governing bodies have built their own knowledge bases that health care providers can access. Public APIs such as the one available from the National Institutes of Health supply researched and cited data that obviate the need for individual companies to compile this information, and non-profit initiatives like the HIV Response Database Initiative have taken on the burden of cross-compiling data and establishing a common model for predictive analytics that any care-giver can use. Even though I’m several years past that work, it’s still gratifying to see that similar systems are now part of established common practice. Wrangling those data sets was painstaking work that required meticulous planning, team coordination and exacting delivery, but it was also among the most satisfying roles of my career.
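
The actual scoring model was proprietary and curated by clinicians, but the general shape of an ingredient-level interaction check with low/medium/high KPIs looks roughly like the sketch below. The severity table is a placeholder, not clinical data:

```python
from itertools import combinations

# Illustrative severity table keyed by pairs of active ingredients.
# These entries are placeholders for the curated, cited data described above.
INTERACTION_SEVERITY = {
    frozenset({"ingredient_a", "ingredient_b"}): 3,  # high
    frozenset({"ingredient_a", "ingredient_c"}): 2,  # medium
    frozenset({"ingredient_d", "ingredient_e"}): 1,  # low
}

KPI_LABELS = {0: "none", 1: "low", 2: "medium", 3: "high"}

def regimen_kpi(prescriptions: dict) -> str:
    """Score a regimen at the active-ingredient level, below the prescription itself.
    `prescriptions` maps each drug name to its list of active ingredients; the KPI
    is driven by the worst pairwise interaction found across the whole regimen."""
    ingredients = {i for actives in prescriptions.values() for i in actives}
    worst = 0
    for pair in combinations(sorted(ingredients), 2):
        worst = max(worst, INTERACTION_SEVERITY.get(frozenset(pair), 0))
    return KPI_LABELS[worst]
```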

Bragi Dash wireless earbuds with motion sensors and pulse oximeter

Recently I have become re-acquainted with the term “N-of-1” – but not in the clinical sense. The term has been borrowed from that field and applied to personal health and fitness, and from there it has come into greater popular awareness. After a knee procedure in 2012 (I was struck by a car while riding my motorcycle) there were several months of rehab that became a major turning point for my overall health. I needed to become stronger, particularly in the lower body, but I also really needed to become lighter. A big part of protecting my knees (I later developed a plica in my ‘good’ knee) was simply taking weight off of the joints. I had to find a balanced approach: be more active while also taking off – and keeping off – the excess weight. Of course, the idea is to have an overall better health profile, so building and maintaining strength (both muscular and cardiovascular) was part of the balancing act. There’s certainly a way to reduce weight by simple calorie restriction, but that can have a negative effect on metabolism, which can in turn hurt sport and workout performance. Due to my own lack of education in this area I used that approach for a while, and it didn’t work out so well. So the N-of-1 methodology was already in use, even if the result was undesirable. By the same token, workout intensity can be measured and pushed, but simply going full-tilt in the gym risked re-injuring my knees (or some other part of my body) and has been found to be counter-productive to other aspects of general health. Again, this is an area where I had some negative experience that required correction. So the goal now is to find that “Goldilocks zone” – balancing nutrition with training and recovery cycles to improve athletic performance while also enhancing the way I look and feel. That’s a pretty tall order, and not one that can be filled by “shooting from the hip” and hoping for the best. It would take many measure/analyze/correct cycles to make this work.

Like many other people I began measuring and monitoring what I ate using a mobile app on my phone, and I tried various fitness trackers to gather daily activity and workout data. While it’s relatively straightforward to get accurate nutritional tracking results from an app like MyFitnessPal (as long as one is honest and consistent), the same cannot be said for fitness trackers. I’ll save the deep dive for the “health” portion of my site (coming soon), but anyone with some familiarity in this area knows that the accuracy of those devices varies widely – and that’s perhaps being a bit too kind about it. One new device that caught my eye is the Bragi Dash, a set of wireless earbuds. Not only do they have their own built-in music player and a waterproof housing for use while swimming, they also carry various sensors for activity tracking and monitoring. Most trackers rely on a heart rate monitor; the Dash goes a step further with a pulse oximeter, which captures not only heart rate but also the level of oxygen saturation in the blood during exertion. I signed on to support their Kickstarter project as a developer, and am currently working with their initial API to gather and analyze that data. The app is simply titled “Bragi RQ”, and the intent is to create an inferential model of respiratory quotient monitoring based on oximetry (from the Dash) as the sole metric. This was inspired by the work of Dr. Peter Attia and his blog, Eating Academy. I will get into the details of the application on its portfolio page, but I will also be blogging about the development process – on both the software and hardware sides of the equation.
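
Since the Bragi API is still in its early stages, the sketch below avoids real API calls and only shows the shape of the session data the RQ model would consume. The Sample fields and the summarize_session helper are my own assumptions, not part of any Bragi SDK:

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Sample:
    timestamp: float   # seconds since start of session
    heart_rate: int    # beats per minute
    spo2: float        # blood oxygen saturation, 0-100 %

def summarize_session(samples: List[Sample]) -> dict:
    """Reduce a workout session to the features an RQ model would consume.
    The actual inference from SpO2 to respiratory quotient is the open research
    question here; this is only the data-shaping step."""
    return {
        "duration_s": samples[-1].timestamp - samples[0].timestamp,
        "mean_hr": mean(s.heart_rate for s in samples),
        "mean_spo2": mean(s.spo2 for s in samples),
        "min_spo2": min(s.spo2 for s in samples),
    }
```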

Along with small-scale gathering and analysis of more accurate training data, I’m also looking at some of the big-picture issues in overall personal health and activity monitoring. I’ve been surprised at the lack of sites and companies that help users analyze their nutrition, physical/body metrics and activity data as a whole. Perhaps Google Fit and Apple Health will approach that territory, as they would certainly be among the companies with access to the data. But if either framework has those ambitions, I’ve seen no outward sign of them. They are collection points – and both have lots of X-Y plots of single data points over time – but they’re not doing anything ‘intelligent’ across sets of data. As much as general awareness is growing about the connections between nutrition, an active lifestyle and overall health, there seems to be no real avenue for people to see those correlations in their personal data. In the spirit of N-of-1 clinical trials, I’m going to use myself as the first test subject. I adopted wpDataTables (and specifically its HighCharts integration) for this site, and I will also be using that framework to visualize my personal fitness data. This will include pulling the information from the various sites that hold my data using their private and partner APIs – MyFitnessPal, Withings, Polar, Bragi and others. From that I’ll create a consolidated semantic layer, and the follow-on charts and other visualizations will be displayed in the upcoming “health” portion of this site.
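
As a first step toward that consolidated layer, the sketch below shows one way to merge per-day records from several sources into a single row per date. It assumes each source’s API extraction has already produced simple date-keyed records; the source names and field layout are placeholders, not the actual partner API responses:

```python
from collections import defaultdict

def consolidate_daily(sources: dict) -> list:
    """Merge per-day records from multiple fitness/nutrition sources into one row
    per date. `sources` maps a source name (e.g. 'myfitnesspal', 'withings',
    'polar') to a list of dicts that each carry a 'date' key plus that source's
    metrics. The API-specific extraction is assumed to happen upstream."""
    merged = defaultdict(dict)
    for source_name, records in sources.items():
        for record in records:
            day = record["date"]
            for key, value in record.items():
                if key != "date":
                    merged[day][f"{source_name}_{key}"] = value
    return [{"date": day, **metrics} for day, metrics in sorted(merged.items())]
```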

¹ TherapyEdge is a registered trademark of ABL, SA

Witnessing The Dark Side of Big Data

While I’ve spent most of the space here describing positive experiences, not everything has been so rosy, and if I’m to be fair about what I’ve learned along the way I should include some of that here as well. I worked in the Artificial Intelligence Group at Countrywide Home Loans from 2004 to 2007. If you’re aware of recent financial history you’ll know why those years “matter” more than others. The peak of the refinance boom came late in 2003, when tax law and changes to the regulations governing financial institutions created a (fool’s) gold rush in the real estate market. And to bookend my time there, the financial crisis began to show outward signs in July of 2007. I use the phrase “outward signs” deliberately, as there were plenty of signs within Countrywide that all was not well in the world of finance.

I managed three groups of analysts that tested the main underwriting systems behind more than 99% of the mortgages funded by Countrywide. Much of the work surrounded changes to guidelines that allowed more loans to be underwritten. This was not something that CHL hid from its shareholders, regulating bodies or employees. In many ways those regulations were undercut in fairly innocuous increments – much like the “boiled frog” analogy that’s often used. But the changes stayed within the boundaries set by Fannie Mae, Freddie Mac and the Fed, so no one was the wiser. It was when I saw a new loan category that attempted to codify the “Friends of Angelo” loans that my eyebrows went up – and I wasn’t alone. All of those loans would otherwise have failed underwriting checks, or would have been priced at a substantially different rate than the one they received under these “guidelines”. When I (and a few others) balked at this change, executives reassured us that all of the governing bodies had signed off on it. So the work proceeded, and the result is now well known and in the public domain.

But the main reason I left Countrywide in early 2007 was what I saw behind the scenes in “servicing”. Underwriting groups don’t usually see what’s in the servicing database – the system where loans live after they have been bundled into mortgage-backed securities. Old business, and not very interesting. Yet this is actually where Countrywide made most of its money, and it held loans that other companies had underwritten – not just CHL loans. Each loan bought by the wholesale group still had to pass the same checks that CHL loans went through, and those systems were part of my portfolio. Because of that, our group had a specific interest in keeping fresh batches of loan data on hand for testing changes to the compliance and other underwriting systems under our purview. One wrinkle of the system was that any loan with a credit report older than 90 days would automatically kick out with a “refer” decision. (Refer was the CHL colloquialism for “reject”, seen as a more palatable term.) So we were in the habit of contacting the secondary marketing group, which managed that database, to get a swath of loans that had recently gone into servicing – and therefore still had known-good credit reports that would pass through the compliance engine without tripping the 90-day rule check.
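
The underwriting rules engine itself was a large commercial system, but the 90-day check is simple enough to illustrate. The function below is my own reconstruction of that single rule, not Countrywide’s code:

```python
from datetime import date, timedelta

CREDIT_REPORT_MAX_AGE = timedelta(days=90)

def credit_report_decision(report_date: date, as_of: date) -> str:
    """Illustrative version of the 90-day rule described above: any loan whose
    credit report is older than 90 days automatically kicks out with a 'refer'
    decision instead of proceeding through the rest of the compliance checks."""
    if as_of - report_date > CREDIT_REPORT_MAX_AGE:
        return "refer"
    return "continue"  # hand off to the remaining underwriting/compliance rules

# Example: a credit report pulled 120 days earlier kicks out with "refer".
print(credit_report_decision(date(2006, 1, 1), date(2006, 5, 1)))  # -> "refer"
```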

When we received a sample data set, the record count was often in the millions. This was considered “a thin slice of the pie”, since the full database covered roughly 1 in 6 loans serviced in the US – the largest portfolio in the industry. I mention this because when we started getting feeds that contained almost no prime loans (conforming loans were thought to constitute the bulk of what CHL bought), we assumed the Secondary Marketing Group had either mis-attributed its data pull or was playing some kind of joke. On the contrary, we were assured by that group that the data we received was a general survey of what had recently gone into servicing. If there were no new prime loans there, then there would have been no A tranches in the bundled mortgage-backed securities into which those loans were delivered. This, too, is now a matter of public record. And from my team’s perspective it got worse. When we started to process the “accepted” sub-prime loans through the compliance engine, most of them received a “refer” decision. We scratched our collective heads and assumed it was because of changes to guidelines between the time the loans were underwritten/bought and the time they went into servicing. So, as an experiment, I set up a server with the version of the rules that would have been in force when a given batch of loans was purchased by Countrywide. Again they all came back with a “refer” decision. I noticed that all of the loans came from one division within Countrywide, and began to suspect that certain servers used by that division were configured with a very old version of the rules engine – one known within the group to allow more loans to receive an “accept” decision. When I reported this back to the Secondary Marketing Group I was told that executives were looking into it, but someone who had previously worked in CHL’s fraud division took me aside and assured me that nothing was going to come of it. At that point I started laying the groundwork to depart Countrywide, and as they say – the rest is history.

I later took a position at Western Asset Management Company, one of the largest fixed-income asset managers in the world. Given its industry position and reputation, I thought I had found a company with the proper restraint to avoid the problems I saw at Countrywide. While the issues at WAMCO weren’t as broad, there were similar points of failure, where “intelligent” systems originally designed to flag improper human behavior were ignored or bypassed by staff. It was 2008; the credit crisis was still looming over the markets but hadn’t yet overflowed onto “main street”. I was managing QA for all four working groups at the company – front office, back office, web applications and reporting. “Reporting” centered on performance – the main mechanism by which company executives viewed trade activity and, secondarily, the system by which traders tracked their bonuses. I spotted a particular line item for a trading desk, having to do with a cross-trade, that looked unusual. I assumed the counterparty or trade type had been mislabeled, and went to one of my staff in charge of testing the settlement system. Everything we found there was consistent with the line item in the report – which was both good news and bad news. It was something that should normally cause the settlements system to throw a warning or error, as a cross trade struck at an above-bid price seemed a pretty clear violation of compliance rules. I took my findings to the managers of the development groups, assuming it was a technical mistake that would be quickly corrected. Instead I was told to “go back to QA”, and was later informed that I would have no role in reviewing the reporting system. As it happened I was also tasked with improving the overall process of testing and release management across those same working groups, and from that point forward any suggestions I offered were summarily stonewalled. I suspected that I had upset the wrong people at a small company – a company with a “flat structure” on paper, but a tightly guarded (and unspoken) hierarchy that long-time employees manipulated to their advantage. I was the new guy, and was making the wrong kind of waves. Once I realized there was no executive “appetite” to correct the issues, I left the company. And like Countrywide, the result of Western Asset’s actions during the credit crunch of 2008 is now a matter of public record.

With the release of Weapons of Math Destruction, it will become a form of pop culture sport to malign algorithmic systems. But my experience tells me that it’s not the computer software that’s at fault, but the avarice of the managers who willfully ignore or intentionally alter those systems to suit their own agenda. That’s the most important lesson I’ve learned from this episode of my career – algorithms don’t lie, it’s the lying liars and the lies they tell that are at issue. Whether results come from a computer or a person flipping beads on an abacus, either can be willfully misconstrued to serve someone’s greed.

To blame big data is to blame the messenger.

A Seasoned Manager In The Vanguard of Technology

Taking a full view of my professional experiences, I remain optimistic and even excited about the opportunities that lie ahead. Both the technological advancements and the increase in general awareness of data science have made it easier to champion better patterns and practices in this field. Along with my professional work in the Azure/Cortana suite of tools (and similar forays into Amazon’s Elastic MapReduce), I am also pursuing a Microsoft Professional Developer certification in Data Science. Much of that coursework overlaps directly with the abstracted examples I provide on this site. So if you haven’t already, please take a moment to review the samples in this portfolio. Aside from the usual code snippets and resulting visualizations, I also delve into the business cases and drivers behind the data – because at the end of the day, that’s what it’s all about. Thanks again for taking the time to review my site, and particularly for checking out this “between the lines” portion of my professional history. If you have any questions or comments, feel free to use the contact form linked at the top of the page.
