A New Year, a New Beginning…

The first few days of a new year feel like a break: an opportunity to pause, think, recharge, and realign toward life's long-term destinations. In the race that's called "life", it's good to have a lull, a moment of silence, a moment of reflection in the continuum of hours and minutes and seconds that rush through. Speaking from experience, knowing our flaws always helps; and to recognize them, we need just two ingredients: a moment of reflection and an open mind.

This past year has been very rewarding for me, in terms of both personal and professional enrichment. I have been really fortunate to know some people, directly or indirectly, whose clear thinking, mentorship, and sometimes sheer genius deeply affected my thought process and outlook. Among those influences were some books, which I am extremely glad I got my hands on. Here they are:

Books

  1. Blockchain by Mark Gates – I read this book mainly because I was feeling left out of the cryptocurrency buzz. I really wanted to know, in a very simple and non-technical way, what it all means, why it is so popular, and whether it is worth its salt. This book was excellent. It is written in a very lucid style, and every chapter ends with a summary that outlines the whole chapter, so one can skip a chapter or two if it goes into too much detail. Overall, an excellent guide if you are new to blockchain technology.
  2. Clean Architecture, The Clean Coder, and Clean Code by Robert C. Martin – I had heard about these books a lot but never got a chance to go through them all. Now that I have read them, all I can say is: these are books every programmer should read once a year. Just as good literature has many layers of meaning that unfold every time you read it, these books reveal new insights and open new doors every time you pick them up!
  3. Functional Swift by Chris Eidhof, Florian Kugler, and Wouter Swierstra; Advanced Swift by Chris Eidhof, Ole Begemann, and Airspeed Velocity; and Core Data by Florian Kugler and Daniel Eggert – These are the most practical and in-depth Swift and Core Data books I have ever had the chance to read. Amazing examples and sample code. The best thing about these books is that they are full of best practices, each explained in depth. Anyone who wants to know these topics deeply will be greatly enriched by them.
  4. A Mind for Numbers by Barbara Oakley, Ph.D. – This book is about learning: lots of insight into the inner workings of the human brain, how it processes information, and how it learns, full of wonderful examples from eminent people. If our brain is to us what a sword is to a warrior, this book teaches us how to be the master of the sword rather than its slave.
  5. Leonardo da Vinci – Notebooks – Interesting one, don't you think? I have always been intrigued by this Renaissance polymath. If you are wondering what I learned, or even understood, by reading those mirror-written backward scribbles of the legendary genius, let me confess – hardly anything. But one important lesson I did learn was the importance of documenting one's work. That gave me the idea of keeping a developer's journal. This is not a new concept; smart developers have been documenting their work since the dawn of time. I keep two journals. In one I note down all my new learnings (I post them on this blog sometimes, if they are big and interesting enough). The other one is more interesting: there I write down every problem I encounter and how I fixed it. You wouldn't believe how much insight it provides into one's learning curve. It also becomes a very interesting read, once it is old enough that I have forgotten how I fixed that pesky problem that kept me up for two nights!

At the very end, I'll share a thought with you. Nothing is permanent. But nothing is transient either. Think of an excavation of historical importance, with obscure wall carvings. Who created them? Who wrote them? A person just like you and me (perhaps on orders from some higher authority) who lived thousands of years ago. Little did he know that thousands of years later, the world would know about his work, no matter how insignificant it seemed then. Similarly, what we do now echoes in eternity. The work you do today may well be uncovered thousands of years from now. Wouldn't you want whoever finds it to be intrigued by your craftsmanship? Let's keep that in mind, and let's amaze future humankind.

With that last thought, I bid you goodbye for now. I wish you a wonderful new year, and may this new beginning renew and rejuvenate you on your journey toward your true potential. Bon voyage!

Machine Learning with CoreML

Rise of CoreML

We iOS developers are a lucky bunch – apart from the usual holidays in December, we enjoy a special Christmas every June, thanks to the Worldwide Developers Conference organized by Apple. 2017 was no exception. When Apple unwrapped the boxes for us, out came the new HomePod speaker, the new beast called iMac Pro, macOS High Sierra – everything was awesome! Then there were the toys for developers. I must have been a very nice guy all year, because it was a pleasant surprise when Apple revealed CoreML, their new machine-learning integration framework – out of professional curiosity, I had been dabbling with machine learning for the past few months. Having the opportunity to bring the power of machine learning to iOS, I could not wait to get my hands dirty! Here's an outline of what I learned:

What is Machine Learning

Before we jump right off the cliff, let's talk a little about what's beneath.

You see, when a human child takes her first step through the doorway of learning, she cannot learn by herself. Instead, she needs her hand carefully held by a teacher, and with intensive guidance she is steered along the path of acquiring knowledge. As she learns, she also gains experience.

Experience.

The trusted friend who will, one day, take the job off her teacher's careful hands and become her lifelong guide and companion – growing with her as she passes through the oft-perilous ways of life. And exactly there, dear reader, is where a machine has differed from a human being thus far. A machine could be taught, but it could not teach itself – until machine learning evolved. Machine learning provides the experience factor to a machine's intelligence system, also known as artificial intelligence. It is the science of getting computers to act without being explicitly programmed.

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.
 – Mitchell, T. (1997). Machine Learning. McGraw-Hill. p. 2. ISBN 0-07-042807-7.

Types of Learning

Based on the algorithms used to train a machine to gain experience, machine learning can be grouped into two major categories – supervised learning and unsupervised learning. Supervised learning is where a machine is trained with a complete set of labeled training data and outcomes. Unsupervised learning, on the other hand, is where the machine is trained on data without labels. Using supervised learning, a machine can solve classification or regression problems; using unsupervised learning, it can solve clustering and some other types of problems. Following are some examples:

  • Classification: The machine is given a dataset and, based on specific parameters, classifies the items into different categories. For example, based on the size and shape of a tumor, the machine can classify it as malignant or benign.
  • Regression: Based on various parameters – such as product price, demand, and distribution – and on historical data, a machine can predict the profit for the current or future years.
  • Clustering: The best example of clustering is probably Google News. It uses an algorithm to group news stories with the same topic and content and show them together. Pattern recognition plays a key part in clustering solutions.

Once such an algorithm is trained, it can produce a model, which the machine then refers to in order to make further predictions, inferences, and deductions. Machine learning tools can generate such a model, but once generated, it cannot be used in iOS apps as is. It needs to be converted to the Xcode-supported .mlmodel format.

CoreML

Apple provides links to a few open-source CoreML models that solve classification problems, such as detecting the major object(s) in a picture or detecting a scene from a picture.

However, apart from these, a machine learning model generated by any machine learning tool can be converted into a CoreML model using CoreML Tools, and then used in the app.

Core ML lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalized linear models. Because it’s built on top of low level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency. You can run machine learning models on the device so data doesn’t need to leave the device to be analyzed.

Using a CoreML model and the Vision framework, it's really easy to build an iOS app that, given a photo, can detect scenes or major objects in it and display them. I won't go into the details of building this app from scratch; rather, I'll discuss the heart of the application – the fun part – and it's just a few steps.

The Photo Recognition App

I will assume that the app is set up to provide an image, either by picking a photo from the native photo picker or by taking one with the camera.

Step 1. The first step is to download a machine learning model from Apple's website and include it in the app. Here I am using the Inceptionv3 model listed on Apple's website. It seems to be a very good model, with much better accuracy than the others, although a bit heavy in size. Now Xcode does some heavy lifting for you: as soon as the model is added, Xcode generates a model class named after the model. To see it, just highlight the model in Xcode's file navigator:

[Screenshot: the generated Inceptionv3 class shown in Xcode's file navigator]

In the next steps, we will refer to this class as Inceptionv3.

Step 2. Now it's time for some code. Import the Vision and CoreML frameworks, which will aid us in our journey, and then implement the following code.
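A minimal sketch of what that code looks like, assuming the Inceptionv3 class Xcode generated in Step 1 (this would live in the method that kicks off recognition):

import CoreML
import Vision

// Wrap the Xcode-generated Inceptionv3 model for use with the Vision framework.
guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
    fatalError("Failed to load the CoreML model")
}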

Here we create an instance of the VNCoreMLModel class for the CoreML model Inceptionv3. It’s sometimes recommended to initialize this early so that it’s faster once the image is selected for recognition.

Step 3. Now we need to create a VNCoreMLRequest which would query the MLModel with our image to find out what it is.
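A sketch of the request; answerLabel is the label this app uses to display the result:

// Build a Vision request around the model. Its completion block runs
// when the request finishes and receives the predictions.
let request = VNCoreMLRequest(model: model) { request, error in
    // The predictions arrive sorted by confidence, best first.
    guard let results = request.results as? [VNClassificationObservation],
        let topResult = results.first else {
        fatalError("Unexpected result type from VNCoreMLRequest")
    }
    // Update the UI on the main thread.
    DispatchQueue.main.async {
        self.answerLabel.text = "\(topResult.identifier) (\(Int(topResult.confidence * 100))%)"
    }
}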

Here we create a VNCoreMLRequest and give it a completion block to run once it finishes execution. The completion block just takes the first result from the prediction set, received as an array of VNClassificationObservation objects. As I discussed before, classification is one type of problem; there are other types, like clustering and regression. Notice that VNClassificationObservation is a subclass of VNObservation.

The VNCoreMLRequest uses a VNCoreMLModel, wrapped around a CoreML model, to run predictions with that model. Depending on the model, the returned observation is either a VNClassificationObservation for classifier models, a VNPixelBufferObservation for image-to-image models, or a VNMLFeatureValueObservation for everything else.

Step 4. We are almost there. The last and final step is to actually execute the request – a job well suited for the one and only VNImageRequestHandler.
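A sketch of that last step, assuming image holds the UIImage obtained from the picker or camera:

// Vision works on CIImage, so convert the UIImage first.
guard let ciImage = CIImage(image: image) else {
    fatalError("Could not convert UIImage to CIImage")
}

// perform(_:) is synchronous, so run it off the main thread.
let handler = VNImageRequestHandler(ciImage: ciImage)
DispatchQueue.global(qos: .userInitiated).async {
    do {
        try handler.perform([request])
    } catch {
        print("Classification failed: \(error)")
    }
}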

All the code listed above can be included in one method; once it executes, the answerLabel displays the name of the major object in the picture along with the confidence.

[Screenshot: the app identifying the major object in a photo, along with its confidence]

A note on accuracy

From the above screenshot, it might appear that the world of machine learning and prediction is all rainbows and unicorns, but in reality it's far from that. Machine learning is still in its infancy and has much room for improvement. As for the iOS app, everything depends on the model used, and it's very easy to miss the optimal sweet spot and instead under-train or over-train the model. In the case of over-training, the model starts focusing on the quirks of the training set, and its accuracy on new data diminishes.

Conclusion

Using a CoreML model and the Vision framework to leverage machine learning and build a perception of the outside world opens up endless possibilities. Once the machine recognizes an object, the obvious next step is to respond to it. In iOS 11, ARKit provides augmented reality – one of many ways to do something with this new superpower the iPhone has gained. I intend to touch on that in my next post. Meanwhile, have fun and learn how to train your machine!

All copyrights of images belong to their respective owners.

Multiple Inheritance in Objective-C

Objective-C does not support multiple inheritance. The reason might be that it would have been too difficult to include in the language, or that the authors considered it a bad programming and design decision. However, in various cases multiple inheritance proves to be helpful. Fortunately, Objective-C does provide some workarounds for achieving it. Following are the options:

Option 1: Message Forwarding

Message forwarding, as the name suggests, is a mechanism offered by the Objective-C runtime. When a message is sent to an object that does not respond to it, the application crashes. But before crashing, the runtime gives the program a second chance to pass the message to an object that actually responds to it. After the lookup for the message fails all the way up to the topmost superclass, the runtime invokes forwarding hooks such as forwardingTargetForSelector: and, later, forwardInvocation:. By overriding one of these methods, one can redirect the message to another object.

Example: Suppose there is a class named Car with a property named carInfo that provides the car's make, model, and year of manufacture as an NSString. It would be very helpful if NSString methods could be called on objects of the Car class, which actually inherits only from NSObject.

// Forward any message Car cannot handle to carInfo, if it responds to it.
- (id)forwardingTargetForSelector:(SEL)sel
{
    if ([self.carInfo respondsToSelector:sel]) {
        return self.carInfo;
    }
    return nil;
}

Option 2: Composition

Composition is a Cocoa design pattern in which an object keeps references to other objects and calls their functionality whenever required – for example, a view building itself out of several other views. In Cocoa terminology, it achieves much of what subclassing does, without actually inheriting.

@interface ClassA : NSObject
- (void)methodA;
@end

@interface ClassB : NSObject
- (void)methodB;
@end

// MyClass exposes the behavior of both ClassA and ClassB
// by holding an instance of each and delegating to them.
@interface MyClass : NSObject {
    ClassA *a;
    ClassB *b;
}
- (id)initWithA:(ClassA *)anA b:(ClassB *)aB;
- (void)methodA;
- (void)methodB;
@end
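Here is a sketch of the corresponding implementation, with illustrative method bodies, showing how MyClass simply delegates each call to the component that knows how to handle it:

@implementation MyClass

- (id)initWithA:(ClassA *)anA b:(ClassB *)aB {
    if ((self = [super init])) {
        a = anA;
        b = aB;
    }
    return self;
}

// Each method forwards to the owned component.
- (void)methodA { [a methodA]; }
- (void)methodB { [b methodB]; }

@end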

Option 3: Protocols

A protocol declares a list of methods to be implemented by the classes that adopt it. One class can adopt as many protocols as it needs and implement their methods. However, with protocols only methods can be inherited, not instance variables.
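A minimal sketch, with illustrative protocol and class names:

@protocol Driveable <NSObject>
- (void)drive;
@end

@protocol Floatable <NSObject>
- (void)floatOnWater;
@end

// AmphibiousCar adopts both protocols, picking up both sets of method
// requirements, but it must provide the implementations itself.
@interface AmphibiousCar : NSObject <Driveable, Floatable>
@end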