About Me

My name is Philip Docena. I used to work in development finance and investments while studying artificial intelligence and machine learning in my spare time. In 2017, I formally left development finance and investments, and am now preparing in earnest for a Computer Science graduate degree.

If all goes to plan, I'll enter Georgia Tech's OMSCS program in Fall 2017, specializing in either Machine Learning or Computational Perception and Robotics. Either way, both sound nerdy, with Machine Learning just a little more mainstream these days. The grand goal is not just to find useful patterns in data, but to let 'machines' do the searching, and do so without much data fudging by humans.

For years, I was curious about one thing: how machines could 'see' and recognize objects (i.e., computer vision). That seemed quite complicated, so I wondered about the far simpler task of recognizing handwritten alphanumeric characters (26 uppercase, 26 lowercase, 10 digits). That was still too much, so I settled on recognizing just handwritten digits.

Consistently identifying individual digits used to be hard, given the variability of handwriting styles (see examples here). But the field has advanced so much that this is now what amounts to a "Hello World!" program: practically anyone can achieve 95%-99% accuracy with just a few lines of code (not kidding; see this link).
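To make the "few lines of code" claim concrete, here is a minimal sketch using scikit-learn's small bundled digits dataset (8x8 images, a stand-in for the usual MNIST benchmark). The dataset and classifier choice here are my own illustration, not necessarily what the linked example uses:

```python
# A minimal digit-recognition sketch with scikit-learn's bundled 8x8
# handwritten-digits dataset. An off-the-shelf SVM classifier typically
# scores well above 95% accuracy on a held-out split with no real tuning.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # ~1,800 labeled images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001)  # RBF-kernel SVM; gamma lightly hand-picked
clf.fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.3f}")
```

That's the whole program: load, split, fit, score. The hard research problems start when the handwriting is messier than this curated dataset.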

I came to realize that techniques that work for digit recognition also apply to general pattern recognition and to extracting patterns from data. (It can also be seen the other way: general pattern recognition techniques can be used to address digit recognition.)

It turns out that even when object and pattern recognition is reduced to this representation, matching human proficiency in all scenarios remains a very hard problem (though a particular version of the digit recognition problem is now considered 'solved'). It also turns out that the field of pattern recognition is extremely deep and vast. It continues to take a lot of my time to understand well --that is, to understand it well enough to take it apart and put it back together.


Where I started


I have worked with computers before. I completed a B.S. in Computer Engineering (Mapua Institute of Technology, Philippines), consulted for Accenture (Manila), joined an IBM software lab (Singapore), and went to business school (National University of Singapore) to focus on corporate strategy and consulting. Finance, banking, and investments were secondary interests.

I then found myself very lucky to join the International Finance Corporation (Washington, DC) via a competitive program for newly minted MBAs and finance/economics graduate students. For over a decade, I made several private equity and venture capital (PE/VC) fund investments and became quite familiar with the commercially risky but developmentally critical PE/VC investment work in emerging markets....

... AI is a completely different world.


Where I am now


I have followed recent trends and huge strides in ML and AI. They are amazing developments. I thought I would try to understand these concepts and some of their cool tools. I did know a little bit from reading over the years. Lessons from my engineering undergrad (numerical methods and analysis, linear algebra, probability and statistics, calculus, algorithms and data structures) and MBA (statistics and operations research) prepared me somewhat to attempt to read hard-core technical papers in AI and ML. Somewhat.

It was rough. Humbling, even. I had read a few technical papers before without difficulty, and many papers by business academics as part of my MBA, sometimes without deep knowledge of the background material and prior art. In contrast, reading quality ML papers and books is hard. Understanding them demands PhD-level familiarity with difficult concepts --or more correctly, difficult notation of otherwise simple concepts-- from many different fields. The key has been a mixture of patience and a lot of reading (that is, repeatedly reading the same topics from different books).

Reading ML papers and book chapters has become a little easier as my fundamentals have built up. Still, there are many papers I have no hope of understanding, stumped by impenetrable math too many times. I have at least two AI/ML intro textbooks on my shelf that I can barely understand past chapter one! To put it mildly, I have never felt so dumb in my life. Theoretical and abstract CS math is... kinda hard. :)

But I love this challenge too much to give up. And I'm okay with small steps. :) Some days I get lucky: I make small progress after learning something related elsewhere that casts things in a less murky light. Baby steps for sure, but progress nonetheless.

And if I put enough baby steps in a row....