My interview experience with Google – Part 1

One month after submitting a job application at Google, I was contacted out of the blue by a recruiter from Google Sydney. You know what they say – at Google, a human is guaranteed to look at your application (even though they receive more than 5,000 a day)!

Gone are the days when you couldn’t land a Google interview without a top CS school on your resume. After all, I’m a Master’s student in Computer Science at a no-name university in India. Your alma mater may help you land the interview, but how far you proceed in the Google recruitment pipeline depends entirely on how you perform in the interviews.

After rescheduling the recruiter screen a couple of times, I finally did the screening round.

Recruiter Screen:

The recruiter was very prompt; my phone rang at the scheduled time. The interview began with a short introduction to the role, and I was then asked to rate myself 1-10 in areas like Algorithms & Data Structures, Distributed Systems, Networking, Linux, C, C++, Java and so on.

Next came the horde of technical questions. This is just the recruiter testing the waters: checking whether you are indeed “Google” material, and whether it’s even worth interviewing you. I recommend getting comfortable with the Big O of common algorithms (at the very least the sorting algorithms) and data structures. Some knowledge of Linux internals will also help.

Make sure you have a good phone connection. My call dropped once, but the interviewer called back within a minute. The call quality was also terrible, but we managed somehow. Since there is no coding involved, the phone alone is fine, but I’d recommend a speaker and microphone set so that your hands are free.

Once my assigned 30 minutes were up, we concluded the interview. Within a couple of minutes, I received an email from the recruiter stating that he would like to schedule a phone interview with an engineer! Yay! I got a phone screen!

Phone Screen 1:

I was asked for my preferred time slots, which the recruiter then pushed into some interview scheduling queue. Within an hour, I was contacted by a recruiting coordinator (they seem to be the ones who schedule the interviews) to confirm the slot, which I did. The interview was to be conducted over Google Hangouts (falling back to phone if that failed), and there was no coding involved in this round.

I was waiting in Hangouts, and the interviewer joined at the scheduled time. He started off with a brief introduction and then quickly turned to technical questions about Linux, TCP/IP, bit manipulation, C/C++, and some of the projects listed on my resume.

The whole interview took less than 45 minutes, and in the remaining time I was encouraged to ask questions. I could tell from the way we concluded that I had passed. The recruiter’s email the next morning confirmed it, and we got to scheduling the next phone screen.

Phone Screen 2:

This interview was a coding round over Hangouts, and a Google Doc was shared with me in advance for writing the code.

The interviewer was again prompt and started off with a brief introduction, then asked questions on Linux, bit manipulation, and C/C++, plus one coding question. I struggled a bit with the coding question since it wasn’t a typical algorithmic question asked of new-grad SWE candidates, but rather a real-world problem. To top it off, we also had some technical faults during the call.

The next morning I received an email from the recruiter acknowledging the technical faults during the interview; they were ready to schedule me one more phone screen.

Phone Screen 3:

This interview was also a coding round over Hangouts, with a Google Doc shared in advance for writing the code. The interviewer was about 3 minutes late (I still wonder how these guys are prompt to the actual minute!), and the interview again started off with a brief introduction.

There were 2 coding questions, and I coded them up in C++. For someone with a competitive programming background and a couple of weeks of interview preparation, they should be a breeze. The interviewer also mentioned that my resume was impressive (who doesn’t love praise!) and asked about some of my projects. We completed the interview in about 30 minutes, and for the remaining time I asked him about some of the projects he had worked on at Google.

Just like the first phone screen, I knew I had impressed the interviewer. The recruiter’s email came the next morning, inviting me onsite! A chance to finally try out the famed Google free food! 🙂

 


Google Photos

There are already plenty of services that let you store images in the cloud – Dropbox, Amazon, Apple, and Google Drive – but they all cap free users at certain storage limits: maybe 5GB, 10GB, or 15GB. With Google Photos, you get unlimited free cloud storage. If you can live with images at 16MP or below, you won’t have to spend a dime on additional storage, even with tens of thousands of images in your photo library. If your images are larger than 16MP, you can store them at full resolution in Google Drive (limited to 15GB), let Photos reduce the resolution to 16MP, or go paid.

If there is one thing you would expect Google to get right, it’s search. And it certainly did with Photos too. Rather than searching by filename or other metadata, you can search based on the actual contents of the image. You don’t have to spend time tagging or adding descriptions; Google Photos cuts through all that with automatic item identification and categorization. Given an image, it can understand what the image is. You can search by colors, items, locations, and even emotions with surprising accuracy.

When you take a photo with GPS turned on, location data is embedded in the image. Most apps detect location by reading this data, but Google goes a step further: it can detect the location from the actual image itself, recognizing over 250,000 locations around the world. If no location data is present and Google can’t detect the location, you can always add it yourself. Just edit the image, find the location on the map, and save.

Photos also comes with a cool feature called Assistant. It creates interesting things from the images you have uploaded. If you have multiple images from the same location, it may create extensive panorama images. If you have images that come in sequence (like the one at the bottom), it will detect the sequence and create animations and videos out of them. It may also apply conventional Instagram-style filters to your images. And if you upload copies of the same image, it will detect the duplicates and store only a single copy.

Another feature of Google Photos is its smart image editor. When you edit an image, it takes the people in the picture into account. So if you apply a vignette effect, rather than just darkening the edges, it applies the effect so that the people in the image stay prominent, not merely the center of the frame.

Like with any other online service, you’re trusting a tech company with your files and giving it more information about you.
So if you’re concerned with privacy, then this service is not for you.
But if you’re not, then you’d be hard-pressed to find a better solution than Google Photos.

What does the big data enterprise market look like in 2016? Is this a winner-take-all market where we will see certain companies dominate…

Various reports have pegged the big data market at around $40 billion in 2016 [1] [2]. Big data clearly has three leaders – Cloudera, Hortonworks, and MapR – of which only Hortonworks has had its IPO. Its stock is trading at around $10 a share.

Hortonworks reported revenues of $46 million in 2014 [3]. Given the steady increase in their revenues over the years, they might touch $100 million in 2016. Hadoop is open source, meaning companies can use it for free. So how does Hortonworks make money? One word: support. Hortonworks has more than 800 customers at the moment, providing 24/7 web and telephone support [4].

Cloudera, being the first to lead the big data race, has the first-mover advantage, meaning more customers. Most companies might be reluctant to shift to one of its competitors. Cloudera has also raised about $1 billion, with a big chunk coming from Intel. Cloudera claimed more than $100 million in revenue in 2014 [5], way ahead of Hortonworks, and is expected to reach $300 million in 2016. Cloudera’s revenue model is the same as Hortonworks’: selling support.

MapR is quite different from the other two. They are dedicated to creating proprietary extensions to Hadoop while maintaining API compatibility, and at the same time provide extra products and capabilities that complement the Hadoop ecosystem. MapR’s strength is in its proprietary products like MapR FS, MapR DB, and MapR Streams [6]. MapR FS is a POSIX filesystem that is distributed, reliable, high-performance, scalable, and fully read/write. The Hadoop filesystem, HDFS, is nowhere close to MapR FS, and that is one of the main reasons customers prefer MapR. In 2014, MapR had about 700 paying customers [7]. MapR M5 had a price tag of $4,000 per node per year [8], which means they might be making a lot more than Hortonworks or Cloudera.

Even if you take the combined market capitalization of all the above companies, it’s nowhere close to the entire big data market. There are plenty of other players, with expected big data revenues for 2013 as follows [9] [10]:

  • Syncsort – $75 million
  • MarkLogic – $96 million
  • Opera Solutions – $124 million
  • Actian – $138 million
  • Pivotal – $300 million
  • PwC – $312 million
  • Accenture – $415 million
  • Palantir – $418 million
  • SAS – $480 million
  • Oracle – $491 million
  • Teradata – $518 million
  • SAP – $545 million
  • Dell – $652 million
  • HP – $869 million
  • IBM – $1.37 billion

More and more startups are entering big data, like DataHero (raised $6.1 million in Series A funding), Tamr, Domo (valued at $2 billion), Arcadia Data, Looker, Kyvos Insights, Confluent (raised $24 million in Series B funding), AtScale, and ThoughtSpot (raised $30 million in Series B funding).

The year 2016 might see more companies providing proprietary or open solutions that complement big data and the Hadoop ecosystem. The open source nature of Hadoop may make it difficult to earn revenue, but there’s absolutely no barrier for a new company or startup to enter the big data race. Big data is definitely going to see a proliferation of players and technologies in 2016!

Big O – How to calculate it?

Say you need to perform a task, and there are 2 algorithms, A and B, for performing it. Let A finish the task in time TA and B in time TB. If we can show that TA is less than TB, then algorithm A is indeed faster than algorithm B.

Suppose we test both algorithms on an input of size n0 and see that TA(n0) < TB(n0). That means algorithm A is faster than algorithm B for input size n0. But we won’t be using the same input size n0 all the time; we need different input sizes depending on the application. If we could show that TA(n) < TB(n) for all n > 0, we could conclude that algorithm A is faster than algorithm B irrespective of the input size. But we have no idea how big or small n is, and one function is not always less than another over the entire range of input sizes. To handle this, we have asymptotic analysis.

Asymptotic analysis removes dependencies like hardware, OS, compiler, and various other factors, and gives us a relation based purely on the input size n. There are several asymptotic notations – big O, big Theta, big Omega, little o, and little omega. Each of these relates the running time of an algorithm (for time complexity) to its input size n. All other factors are ignored.

Big O notation is used for creating an upper bound for an algorithm: whatever happens, the algorithm will not take more time than what the big O bound shows. So big O notation is used mainly for worst-case analysis. We strip off lower-order terms and constants to get a relation purely in terms of the input size n.

Say you need to find a telephone number in a listing of telephone numbers. What do you do? You start at the top and go down to the bottom, comparing each entry with the number you are searching for. This is what we call a sequential search or linear search. If you add 1 more telephone number to the listing, you need to check at most 1 more entry (assuming the number has not already been found). So with n telephone numbers, you need at most n comparisons to find the number. We call this linear time complexity, or O(n).
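As a sketch, the phone-listing scan above might look like this in C++ (representing the listing as a vector of strings is just an illustration):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Returns the index of `target` in `numbers`, or -1 if absent.
// In the worst case we compare against every entry: O(n).
int linear_search(const std::vector<std::string>& numbers,
                  const std::string& target) {
    for (std::size_t i = 0; i < numbers.size(); ++i) {
        if (numbers[i] == target) return static_cast<int>(i);
    }
    return -1;
}
```

Adding one more entry to `numbers` adds at most one more comparison, which is exactly why the running time grows linearly.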

Suppose you need to find a word in a dictionary, say “Good”. Will you start at A and do a linear search through every single word until you find it? No, you won’t, because a linear search is too time-consuming here. Instead, you open the dictionary at the middle. The word you need must lie in one of the two halves, so you go to that half and keep halving until you find your word. Since the search range is halved at each step, the number of steps grows logarithmically with the size of the dictionary: O(log n). In asymptotic analysis, we do not care about the base of the logarithm. The reason is that converting a logarithm from one base to another only multiplies it by a constant, and since constants are ignored in asymptotic analysis, the base has no importance.
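This halving idea is the textbook binary search. A minimal C++ sketch, assuming the words are already in sorted order:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Binary search over a sorted vector of words.
// Each iteration halves the remaining range, so at most
// about log2(n) comparisons are needed: O(log n).
bool contains(const std::vector<std::string>& sorted_words,
              const std::string& word) {
    std::size_t lo = 0, hi = sorted_words.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
        if (sorted_words[mid] == word) return true;
        if (sorted_words[mid] < word) lo = mid + 1;  // search the upper half
        else hi = mid;                               // search the lower half
    }
    return false;
}
```

Note that this only works because the input is sorted – just as the dictionary trick only works because the words are alphabetized.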

Suppose you need the element of an array at index 5 (i.e., the value of a[5]). You do not have to search through a[0] to a[4] to determine it; a single lookup at a[5] gets you the answer. Whatever the value stored there, you need just one lookup. The time for the operation does not depend on the input size; it takes only constant time. This is what we call constant complexity, or O(1).

When we say that the running times of 2 algorithms are asymptotically the same, it doesn’t mean their running times are equal. For example, 100n lg n and 2n lg n are asymptotically the same (because the constants are ignored), though their values are not.

How do I get an internship at Google?

Among all the internships that Google offers, the Software Engineering internship is the favorite and has the most openings. Below is the list of qualifications that Google seeks in prospective interns:

  • Experience in systems software or algorithms
  • Excellent implementation skills (C++, Java, Python)
  • Knowledge of Unix/Linux or Windows environment and APIs
  • Familiarity with TCP/IP and network programming

These are their preferred qualifications, and you need to make sure that you do indeed qualify. Have a look at Students – Google Careers.

To get the internship, the first step is making sure that you get an interview. More than 50k students apply to Google every summer, and only about 25k get a first interview. So you need to make sure you get that first interview. It’s not as easy as it sounds. When you submit your application, you need three things: a resume, a cover letter, and a transcript.

Google asks for a transcript because they do look at how you have done in your CS subjects. If you are not a CS student, make sure you take some CS courses, be it at your university or through Coursera, edX, or similar sites. It’s better to have good scores in these subjects. Low scores don’t mean you lack the required knowledge; it’s just that high scorers have put in the effort and hence are probably hard workers.

In the cover letter, explain why you are suitable for the internship. Look at the qualifications and tailor your cover letter accordingly. For example, if they want people proficient with TCP/IP and networking, build a networking app like a simple HTTP server, a port scanner, or something similar.
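To illustrate the port-scanner idea, here is a bare-bones sketch in C++ using POSIX sockets (Linux/macOS only; `is_port_open` is a hypothetical helper name, and a real scanner would add timeouts and error reporting):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Attempts a TCP connection to ip:port.
// Returns true if something is listening there.
bool is_port_open(const std::string& ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;
    sockaddr_in addr{};  // zero-initialize the address struct
    addr.sin_family = AF_INET;
    addr.sin_port = htons(static_cast<uint16_t>(port));
    inet_pton(AF_INET, ip.c_str(), &addr.sin_addr);
    bool open = connect(fd, reinterpret_cast<sockaddr*>(&addr),
                        sizeof(addr)) == 0;
    close(fd);
    return open;
}
```

Looping this over a range of ports gives you a toy scanner – small, but exactly the kind of project that shows a recruiter you actually understand TCP/IP.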

In the resume, you need to have something worth looking at. At Google, candidate selection happens in stages, and only if you pass them all do you get the first interview. The best way in is through employee referrals. If you know someone who works at Google and can refer you, your chances of getting that interview are much improved.

Make sure you do some personal projects. Contribute to open source. Mastering the basics of Python is an easy task; learn Tkinter or some other GUI library and make some GUI apps. Host them on GitHub. Keep making more projects.

Once you get the first interview, all that matters is how you perform in it. You need to be thorough with data structures and algorithms. Study from CLRS and other algorithms books, implement them in your language of choice, and start competitive programming.

MapR Technologies interview experience

MapR Technologies, based in San Jose, USA, is one of the leaders in Apache Hadoop, along with Cloudera and Hortonworks. MapR lets you do more with Hadoop by combining it with various architectural innovations. The MapR platform not only provides enterprise-grade features such as high availability, disaster recovery, security, and full data protection, but also allows Hadoop to be easily accessed as traditional network-attached storage (NAS) with read-write capabilities.

It was quite a surprise to hear that MapR was coming to my college to recruit interns. They needed 1 or 2 interns for their Hyderabad office. The most interesting part was that they had never hired interns in India before, which meant whoever was selected would become the first intern in their Indian office!

Unfortunately, the day they came was the day I was going home, and I had already booked my tickets. But they allowed me to be interviewed first so that I wouldn’t be late. The first interview started off with a nice introduction about me. Then came the first question: merge two sorted linked lists. Pretty easy; I did it in O(n) and accounted for all the edge cases. The interviewer was satisfied, and we moved on to the next question, the traditional producer-consumer problem. Though I got it right, my program could have been optimized using condition variables. The next question was a system design question on databases and file systems, which I was not able to answer satisfactorily owing to my lack of experience with scaling. I was then given a take-home test, which I was supposed to complete within a couple of hours.

My next interviewer drilled me on various data structures – stacks, queues, trees, hash tables – in relation to system design. (It’s been a year since I was interviewed, and since I was asked so many questions, I’m unable to recall most of them.) In both interviews, the emphasis was on data structures and strong C programming skills in relation to system design. Once the interviews were over, I went back to my hostel, completed my take-home test, and mailed it to them. On my way home, I got the call. Yes! I had been selected at MapR Technologies!

Get that interview at Google

Google receives about 3 million applications a year – roughly 12,000 every business day. A simple LinkedIn search reveals that Google has some 1,200 recruiters across the globe, which means every Google recruiter handles on average 10 applications per business day. Sounds like a pretty good chance for any applicant to get noticed by a recruiter, right?

Wrong. A recruiter has much more work than looking through applications 8 hours a day. They need to scour LinkedIn, Stack Overflow, GitHub, and other job boards in search of prospective candidates. They need to evaluate skill sets, experience, and education, and reach out to prospective candidates for a friendly chat. They need to manage the whole recruitment process, making sure that interviews are conducted efficiently and professionally. They need to write reports on job openings, hires, and post-hire summaries for hiring managers. They need to mentor and guide recruiting coordinators. They need to perform reference checks, salary recommendations, salary negotiations, offer generation, and handle offer acceptance/decline situations. In short, they have a hell of a lot of work and can be picky about whom to call for interviews. After all, it’s 10 times more difficult to get into Google than Harvard.

There are plenty of ways by which you can land an interview:

  • Employee referrals – This has the highest chance of success, assuming the employee actually knows you and has worked under/with you. Otherwise it might not carry much weight, yet it might still land you the interview.
  • Applying on the careers site – Though it’s pretty easy to submit an online application to Google, chances are you’ll be waiting forever for them to contact you.
  • Get noticed by a recruiter – A good LinkedIn, GitHub, or Stack Overflow profile, or even a technical blog, can get the attention of a recruiter and land you your interviews.

Resume

Your first point of entry is the resume. A recruiter spends on average about 6 seconds glancing through it. Unless your resume satisfies certain characteristics, you won’t even pass the resume screen.

  • Stick to a 1 page resume format.
  • Focus on accomplishments and not responsibilities.
  • Have some open source contributions and pet projects of your own. Create a mobile or web app.
  • Stop making a CV and start writing a resume. Recruiters have no interest in knowing your birthday or marital status!
  • Do not write in paragraphs.
  • Stop writing an Objective section. It brings nothing new to the table.

Education

Plenty of people say that where you studied won’t matter at Google. That’s true to some extent, as the engineers interviewing you won’t care about your alma mater. But your recruiter will. To them, your college is yet another parameter in a search query used to cut down the number of prospective candidates. A recruiter always tries to reduce the probability of a bad hire getting in, even if that means rejecting good candidates too.

But once you’ve landed your first job, your alma mater won’t matter much; only your experience and your company do. So even if you don’t get into Google on the first try, get into a good startup and gain some experience. Then apply to Google!