Google introduces more helpful Machine Learning-based services
At Google I/O 2019, the Mountain View company is looking to become more helpful by introducing more Machine Learning-based services.
Our mission to make information universally accessible and useful hasn’t changed over the past 21 years, but our approach has evolved over time. Google is no longer a company that just helps you find answers. Today, Google products also help you get stuff done, whether it’s finding the right words with Smart Compose in Gmail, or the fastest way home with Maps. Simply put, our vision is to build a more helpful Google for everyone, no matter who you are, where you live, or what you’re hoping to accomplish. When we say helpful, we mean giving you the tools to increase your knowledge, success, health, and happiness.
Google CEO Sundar Pichai
Google Assistant
Google developed completely new speech recognition and language understanding models, shrinking 100GB of models in the cloud down to less than half a gigabyte. With these new models, the AI that powers the Assistant can now run locally on a phone. This breakthrough enabled Google to create a next-generation Assistant that processes speech on-device with nearly zero latency, transcribing in real time even when you have no network connection.
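Google hasn't published the internals of these compressed models, but the general pattern it describes, loading a small quantized model from local storage and running inference with no network round trip, can be sketched with TensorFlow Lite. This is a minimal illustration only; the model file, input shape, and decoding step are hypothetical placeholders, not the Assistant's actual pipeline.

# Minimal sketch of on-device inference with a compressed model, using the
# TensorFlow Lite interpreter. "assistant_asr.tflite" and the input shape
# are hypothetical placeholders, not Google's actual Assistant model.
import numpy as np
import tensorflow as tf

# Load the compact model from local storage; no network connection is needed.
interpreter = tf.lite.Interpreter(model_path="assistant_asr.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a window of microphone audio shaped to whatever the model expects.
audio_window = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], audio_window)

# Inference runs entirely on the phone's own hardware, which is what makes
# near-zero-latency, offline transcription possible.
interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
# A real ASR pipeline would then decode these outputs into text in real time.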
Running on-device, the next-generation Assistant can process and understand your requests as you make them and deliver answers up to 10 times faster. You can multitask across apps, making tasks like creating a calendar invite, finding and sharing a photo with your friends, or dictating an email faster than ever before. With Continued Conversation, you can make several requests in a row without having to say “Hey Google” each time.
Google is also expanding Duplex beyond voice to help you get things done on the web. To start, it will support two tasks: booking rental cars and buying movie tickets. Using “Duplex on the Web,” the Assistant will automatically enter information, navigate a booking flow, and complete a purchase on your behalf. Thanks to major advances in deep learning, much more accurate speech and natural language understanding can now run on mobile devices, enabling the Google Assistant to work faster for you.
Google Lens
New features in Google Search and Google Lens use the camera, computer vision and augmented reality (AR) to provide visual answers to visual questions. Now, AR is available directly in Search. If you’re searching for sharks online, you can see a great white up close from different angles and even see what it would look like on your lawn.
You can also use Google Lens to get more information about what you’re seeing in the real world. So if you’re at a restaurant and point your camera at the menu, Google Lens will highlight which dishes are popular and show you pictures and reviews from people who have been there before.
In Google Go, a search app for first-time smartphone users, Google Lens will read out loud the words you see, helping the millions of adults around the world who struggle to read everyday things like street signs or ATM instructions.
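Under the hood, a feature like this pairs optical character recognition with text-to-speech. As a rough illustration of that general pipeline, using open-source stand-ins such as pytesseract and pyttsx3 rather than Google's own models, it might look like this:

# Illustrative OCR-then-speak pipeline built from open-source stand-ins.
# "street_sign.jpg" is a hypothetical example image; Google Lens itself
# relies on Google's own vision and speech models, not these libraries.
import pytesseract
import pyttsx3
from PIL import Image

# Recognize the words visible in a photo, e.g. a street sign or ATM screen.
image = Image.open("street_sign.jpg")
text = pytesseract.image_to_string(image)

# Read the recognized words out loud.
engine = pyttsx3.init()
engine.say(text)
engine.runAndWait()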
Project Euphonia
Several products were also introduced with new tools and accessibility features, including Live Caption, which can caption speech in a video, a podcast, or a conversation happening in your home. In the future, Live Relay and Project Euphonia will help people who have trouble communicating verbally, whether because of a speech disorder or hearing loss.
We have seen how Machine Learning is changing the world for the better. With continued support from the developer community, it is only a matter of time before it becomes a meaningful tool that enriches our lives.