Google Announces New Machine Learning Center

Google has taken one more step towards turning the vision of artificial intelligence into a reality. The search engine giant announced the opening of its new machine learning research center.

Located in Zurich, Switzerland, Google's new machine learning center offers a hub where engineers and researchers can fine-tune the company's machine learning technology. According to a recent post on the official Google Blog, the center will focus on three areas: machine learning, natural language processing & understanding, and machine perception.

The search engine giant chose Zurich because it is home to Google's largest engineering office outside the United States, which already powers the company's Knowledge Graph and the conversation engine for Assistant in Allo.

“Today, we’re excited to announce a dedicated Machine Learning research group in Europe, based in our Zurich office. Google Research, Europe, will foster an environment where software engineers and researchers specialising in ML will have the opportunity to develop products and conduct research right here in Europe, as part of the wider efforts at Google,” wrote Emmanuel Mogenet, head of Google Research, Europe.

Mogenet added that researchers at the new center will explore ways to improve machine learning infrastructure, and that they will work closely with linguists to advance the company's natural language understanding technology. It therefore seems likely that the new center will collaborate with the London-based DeepMind group, which Google acquired back in 2014. DeepMind's AlphaGo program gained mainstream notoriety earlier this year after it beat one of the world's top Go players.

Machine Learning: the Basics

So, what exactly is machine learning, and why is Google interested in the technology? The term dates back more than half a century, to when programmer and artificial intelligence pioneer Arthur Samuel described it as the “field of study that gives computers the ability to learn without being explicitly programmed.” That deceptively simple definition still sums up the discipline nicely.

The terms “machine learning” and “artificial intelligence” are often used interchangeably, but they are not quite the same thing. Artificial intelligence is intelligence exhibited by machines or computers; machine learning is a subfield of artificial intelligence that studies algorithms which allow computers to learn from data.

Machine learning has several different approaches, each of which has its own unique characteristics. Some of the most common approaches used in machine learning include the following:

  • Decision tree learning – describes the use of a decision tree as a predictive model, mapping observations about an item to conclusions about its target value.

  • Association rule learning – method for discovering relationships between different variables in a database.

  • Artificial neural networks – also known simply as neural networks (NNs), these learning algorithms were inspired by the structure of biological neural networks. Artificial neural networks are commonly used to identify patterns and relationships in data.

  • Deep learning – uses artificial neural networks with multiple hidden layers between input and output.

  • Inductive logic programming – this approach to rule learning uses logic programming to represent input examples, background knowledge, and hypotheses. Inductive programming is a related approach that considers any programming language for representing hypotheses, not just logic programming.

  • Support vector machines – set of supervised learning methods that are commonly used for regression and classification.

  • Clustering – also known as cluster analysis, this is the assignment of observations into subsets known as clusters, such that observations in the same cluster are similar according to some criterion. Clustering allows for improved categorizing and curating of data.

  • Bayesian networks – type of probabilistic graphical model representing random variables along with their conditional dependencies in a directed acyclic graph (DAG). In the medical field, Bayesian networks are commonly used to represent the relationships between diseases and symptoms.

  • Representation learning – used to discover better representations of the inputs provided during training.

  • Similarity and metric learning – the learning machine is given pairs of objects labeled as similar or dissimilar, and from them it learns a similarity function (or distance metric) that can predict whether new pairs are similar.

  • Sparse dictionary learning – represents each datum as a sparse linear combination of basis elements, and learns those basis elements (the “dictionary”) from the data.

  • Genetic algorithms – search heuristics that mimic natural selection, using methods such as mutation and crossover to generate new genotypes as candidate solutions to a problem.
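To make one of these approaches concrete, the clustering idea above can be sketched as a tiny k-means implementation in plain Python. This is a minimal illustration, not production code, and the data points and cluster count are invented for the example:

```python
import random

def kmeans(points, k, iterations=10, seed=0):
    """Group 2-D points into k clusters by iteratively refining centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda i: (x - centroids[i][0]) ** 2
                                        + (y - centroids[i][1]) ** 2)
            clusters[nearest].append((x, y))
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids, clusters

# Two well-separated groups of points; k-means finds one centroid near each.
data = [(1, 1), (1.5, 2), (2, 1.5), (8, 8), (8.5, 9), (9, 8.5)]
centroids, clusters = kmeans(data, k=2)
```

On this toy data the algorithm converges within a few iterations to one centroid near each group, which is exactly the "assignment of observations into subsets" described in the clustering bullet above.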

The Growing Trend of Machine Learning

Google isn't the only company tinkering with machine learning technology. According to a survey of IT executives conducted by 451 Research and Blazent, 67.3% of respondents said they either have machine learning and/or predictive analytics in place, or they are planning to add them in the near future. The survey also found that 66.7% of respondents said they are either currently using machine learning technology for recommender systems, or plan to in the near future.

In 2006, the streaming TV and movie service Netflix held a competition to find a more effective algorithm for matching movie recommendations to its users. Netflix's Cinematch algorithm automatically recommends similar titles based on a user's viewing history, and the company wanted to improve the relevancy and overall quality of those recommendations. A consortium of researchers from AT&T Labs, BigChaos, and Pragmatic Theory won the contest by developing a machine learning algorithm.
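Netflix's production algorithms are far more sophisticated than this, but the basic idea behind a neighborhood-style recommender can be sketched in a few lines of Python. The titles and viewing histories below are invented purely for illustration:

```python
def jaccard(a, b):
    """Similarity between two viewing histories: shared titles / all titles."""
    return len(a & b) / len(a | b)

def recommend(target, histories):
    """Suggest unseen titles from the user whose history best matches `target`."""
    best_user = max(histories, key=lambda user: jaccard(target, histories[user]))
    return sorted(histories[best_user] - target)

# Hypothetical users and their watched titles.
histories = {
    "alice": {"Stranger Things", "Dark", "The OA"},
    "bob": {"The Office", "Parks and Recreation", "30 Rock"},
}
my_history = {"Stranger Things", "Dark"}
picks = recommend(my_history, histories)  # alice is the closest match
```

Here the target history overlaps heavily with alice's, so her unseen title is suggested; real systems replace the simple set overlap with learned models trained on millions of ratings.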

Even Apple has jumped on the bandwagon. As explained in this Macworld article, the latest version of Apple's mobile operating system is powered in part by machine learning technology. Available later this year, iOS 10 will use machine learning and facial recognition to automatically curate and group users' photos. So instead of manually tagging people in your photos, you can sit back and let the new operating system do it for you.

Speaking at the annual Worldwide Developers Conference last month, Apple also unveiled plans to protect its users with differential privacy. According to CDT.org, differential privacy is a mathematical framework for collecting aggregate data about users while perturbing individual data points to protect each user's privacy. Google was among the first to deploy the technology at scale, using it in Chrome back in 2014, and Apple's announcement makes it one of the few major companies to follow suit.
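The exact mechanisms Google and Apple use are more elaborate, but the core idea of perturbing individual data points can be illustrated with the classic randomized-response technique. This sketch simulates a hypothetical yes/no survey in Python:

```python
import random

def randomized_response(truth):
    """Report a sensitive yes/no answer with plausible deniability.

    With probability 1/2 report the truth; otherwise report a fair coin
    flip. Any single answer could be noise, so no individual's real
    answer is revealed, yet the population-level rate stays recoverable.
    """
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports):
    """Invert the noise: P(report yes) = 0.5 * true_rate + 0.25."""
    observed = sum(reports) / len(reports)
    return (observed - 0.25) / 0.5

# Simulate 100,000 users, 30% of whom would truthfully answer "yes".
random.seed(42)
reports = [randomized_response(random.random() < 0.30) for _ in range(100_000)]
estimate = estimate_true_rate(reports)  # close to 0.30 despite the noise
```

The aggregate statistic survives the noise because the distortion is applied at a known rate and can be mathematically inverted, while any individual report stays deniable.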

Thanks for reading and feel free to let us know your thoughts in the comments below regarding machine learning.