Thursday 26 March 2015

Game Performance: Layout Qualifiers

Today, we want to share some best practices for using the OpenGL Shading Language (GLSL) that can optimize the performance of your game and simplify your workflow. Specifically, layout qualifiers make your code more deterministic and increase performance by reducing redundant work.



Let’s start with a simple vertex shader and change it as we go along.



This basic vertex shader takes position and texture coordinates, transforms the position and outputs the data to the fragment shader:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

uniform mat4 matWorldViewProjection;

varying vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Vertex Attribute Index


To draw a mesh onto the screen, you need to create a vertex buffer and fill it with vertex data; for this example, that includes positions and texture coordinates.



In our sample shader, the vertex data may be laid out like this:

struct Vertex
{
  Vector4 Position;
  Vector2 TexCoords;
};

Therefore, we defined our vertex shader attributes like this:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

To associate the vertex data with the shader attributes, a call to glGetAttribLocation gets the handle of each named attribute. The attribute format, including its stride and byte offset within the interleaved Vertex struct, is then detailed with a call to glVertexAttribPointer.

GLint handleVertexPos = glGetAttribLocation( myShaderProgram, "vertexPosition" );
glVertexAttribPointer( handleVertexPos, 4, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*)0 );

GLint handleVertexUV = glGetAttribLocation( myShaderProgram, "vertexUV" );
glVertexAttribPointer( handleVertexUV, 2, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*)offsetof( Vertex, TexCoords ) );

But you may have multiple shaders with the vertexPosition attribute, and calling glGetAttribLocation for every shader is wasted work that increases the loading time of your game.



Using layout qualifiers, you can change your vertex shader attribute declarations like this:

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

To do so, you also need to tell the shader compiler that your shader targets OpenGL ES 3.0. This is done by adding a version declaration:

#version 300 es

Let’s see how this affects our shader; the changed lines are the version declaration, the attribute declarations, and the output variable:

#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;


uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Note that we also changed outTexCoord from varying to out. The varying keyword is removed in version 300 es, so the shader will not compile without this change.



Note that vertex attribute layout qualifiers and #version 300 es are supported from OpenGL ES 3.0. The desktop equivalent is supported from OpenGL 3.3, using #version 330.



Now you know your position attribute is always at location 0 and your texture coordinates are always at location 1, so you can bind your vertex format without using glGetAttribLocation:

const int ATTRIB_POS = 0;
const int ATTRIB_UV = 1;

glVertexAttribPointer( ATTRIB_POS, 4, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*)0 );
glVertexAttribPointer( ATTRIB_UV, 2, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*)offsetof( Vertex, TexCoords ) );

This simple change leads to a cleaner pipeline, simpler code, and reduced loading time.
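
If you build with the Android Java bindings rather than native code, the same pattern applies. Here is a minimal sketch using GLES30; the class name, the stride constant, and the assumption that a vertex buffer object is already bound to GL_ARRAY_BUFFER are ours, matching the interleaved Vertex layout above (16 bytes of position followed by 8 bytes of UV):

import android.opengl.GLES30;

public class MeshBinder {
  // Locations fixed by the layout qualifiers in the vertex shader.
  private static final int ATTRIB_POS = 0;
  private static final int ATTRIB_UV = 1;
  // 4 position floats + 2 UV floats, 4 bytes each.
  private static final int VERTEX_STRIDE = (4 + 2) * 4;

  // Assumes the vertex buffer object is already bound to GL_ARRAY_BUFFER.
  public static void bindVertexFormat() {
    GLES30.glEnableVertexAttribArray(ATTRIB_POS);
    GLES30.glVertexAttribPointer(ATTRIB_POS, 4, GLES30.GL_FLOAT, false, VERTEX_STRIDE, 0);

    GLES30.glEnableVertexAttribArray(ATTRIB_UV);
    // The UV attribute starts after the 16-byte position.
    GLES30.glVertexAttribPointer(ATTRIB_UV, 2, GLES30.GL_FLOAT, false, VERTEX_STRIDE, 16);
  }
}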


To learn more about performance on Android, check out the Android Performance Patterns series.



Posted by Shanee Nishry, Games Developer Advocate

Wednesday 25 March 2015

Developing audio apps for Android Auto

Posted by Joshua Gordon, Developer Advocate



Have you ever wanted to develop apps for the car, but found the variety of OEMs and proprietary platforms too big of a hurdle? Now with Android Auto, you can target a single platform supported by vehicles coming soon from 28 manufacturers.



Using familiar Android APIs, you can easily add a great in-car user experience to your existing audio apps, with just a small amount of code. If you’re new to developing for Auto, watch this DevByte for an overview of the APIs, and check out the training docs for an end-to-end tutorial.






Playback and custom controls


Custom playback controls on NPR One and iHeartRadio.



The first thing to understand about developing audio apps on Auto is that you don’t draw your user interface directly. Instead, the framework has two well-defined UIs (one for playback, one for browsing) that are created automatically. This ensures consistent behavior for drivers across audio apps, and frees you from dealing with car-specific functionality or layouts. Although the layout is predefined, you can customize it with artwork, color themes, and custom controls.



Both NPR One and iHeartRadio customize their UI. NPR One adds controls to mark a story as interesting, to view a list of upcoming stories, and to skip to the next story. iHeartRadio adds controls to favorite stations and to like songs. Both apps store user preferences across form factors.



Because the UI is drawn by the framework, playback commands need to be relayed to your app. This is accomplished with the MediaSession callback, which has methods like onPlay() and onPause(). All car-specific functionality is handled behind the scenes; for example, you don’t need to know whether a command came from the touch screen, the steering wheel buttons, or the user’s voice.
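
As a rough illustration, a minimal callback might look like the sketch below; the class name and the player calls in the comments are placeholders, not part of the Auto APIs:

import android.media.session.MediaSession;

// A minimal sketch of relaying framework playback commands to your player.
public class PlaybackCallback extends MediaSession.Callback {
  @Override
  public void onPlay() {
    // Start or resume your player here and update the session's PlaybackState.
  }

  @Override
  public void onPause() {
    // Pause your player here and update the session's PlaybackState.
  }
}

// Registered once on the session, e.g.: mediaSession.setCallback(new PlaybackCallback());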



Browsing and recommendations



Browsing content on NPR One and iHeartRadio.



The browsing UI is likewise drawn by the framework. You implement the MediaBrowserService to share your content hierarchy with the framework. A content hierarchy is a collection of MediaItems that are either playable (e.g., a song, audio book, or radio station) or browsable (e.g., a favorites folder). Together, these form a tree used to display a browsable menu of your content.
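
A bare-bones service might look like this sketch; the root ID and the empty item list are placeholders for your own catalog:

import android.media.browse.MediaBrowser.MediaItem;
import android.os.Bundle;
import android.service.media.MediaBrowserService;
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of sharing a content hierarchy with the framework.
public class MyMediaBrowserService extends MediaBrowserService {
  @Override
  public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
    // Returning a non-null root grants the caller permission to browse.
    return new BrowserRoot("root", null);
  }

  @Override
  public void onLoadChildren(String parentId, Result<List<MediaItem>> result) {
    List<MediaItem> items = new ArrayList<>();
    // Fill items with browsable folders and playable tracks for parentId.
    result.sendResult(items);
  }
}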



With both apps, recommendations are key. NPR One recommends a short list of in-depth stories that can be selected from the browsing menu. These improve over time based on user feedback. iHeartRadio’s browsing menu lets you pick from favorites and recommended stations, and its “For You” feature gives recommendations based on user location. The app also provides the ability to create custom stations from the browsing menu. Doing so is efficient and requires only three taps ("Create Station" -> "Rock" -> "Foo Fighters").



When developing for the car, it’s important to quickly connect users with content to minimize distractions while driving. It’s important to note that design considerations on Android Auto are different than on a mobile device. If you imagine a typical media player on a phone, you may picture browsable menus of “all tracks” or “all artists”. These are not ideal in the car, where the primary focus should be on the road. Both NPR One and iHeartRadio provide good examples of this, because they avoid deep menu hierarchies and lengthy browsable lists.



Voice actions for hands free operation



Voice actions (e.g., “Play KQED”) are an important part of Android Auto. You can support voice actions in your app by implementing onPlayFromSearch() in the MediaSession.Callback. Voice actions may also be used to start your app from the home screen (e.g., “Play KQED on iHeartRadio”). To enable this functionality, declare the MEDIA_PLAY_FROM_SEARCH intent filter in your manifest. For an example, see this sample app.
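
In code, supporting a query like “Play KQED” can be as simple as the sketch below; the class name is ours, and findAndPlay() is a hypothetical method that searches your catalog and starts playback:

import android.media.session.MediaSession;
import android.os.Bundle;

// A minimal sketch of handling voice-initiated playback.
public class VoiceSearchCallback extends MediaSession.Callback {
  @Override
  public void onPlayFromSearch(String query, Bundle extras) {
    // query holds the recognized speech as plain text, e.g. "KQED".
    // findAndPlay(query);  // hypothetical catalog lookup + playback
  }
}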



Next steps



NPR One and iHeartRadio are just two examples of great apps for Android Auto today. They feel like a part of the car, and look and sound great. You can extend your apps to the car today, too, and developing for Auto is easy. The framework handles the car-specific functionality for you, so you’re free to focus on making your app special. Join the discussion at http://g.co/androidautodev if you have questions or ideas to share. To get started on your app, visit developer.android.com/auto.



Thursday 19 March 2015

Hello Places API for Android and iOS!



Posted by Jen Kovnats Harrington, Product Manager, Google Maps APIs



Originally posted to Google Geo Developers blog



People don’t think of their location in terms of coordinates on a map. They want context on what shops or restaurants they’re at, and what’s around them. To help your apps speak your users’ language, we’re launching the Places API for Android, as well as opening a beta program for the Places API for iOS.



The Places API web service and JavaScript library have been available for some time. By providing native support for Android and iOS devices, the new APIs let you optimize the mobile experience by taking advantage of the device’s location signals.



The Places APIs for Android and iOS bridge the gap between simple geographic locations expressed as latitude and longitude, and how people associate location with a known place. For example, you wouldn’t tell someone you were born at 25.7918359,-80.2127959. You’d simply say, “I was born in Jackson Memorial Hospital in Miami, Florida.” The Places API brings the power of Google’s global places database into your app, providing more than 100 million places, like restaurants, local businesses, hotels, museums, and other attractions.





Key features include:



  • Add a place picker: a drop-in UI widget that allows your users to specify a place (see the sketch after this list)

  • Get the place where the user is right now

  • Show detailed place information, including the place’s name, address, phone number, and website

  • Use autocomplete to save your users time and frustration typing out place names, by automatically completing them as they type

  • Make your app stand out by adding new places that are relevant to your users and seeing the places appear in Google's Places database

  • Improve the map around you by reporting the presence of a device at a particular place.
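
As a rough sketch of the place picker flow from inside an Activity (the class name and request code are placeholders):

import android.app.Activity;
import android.content.Intent;
import com.google.android.gms.common.GooglePlayServicesNotAvailableException;
import com.google.android.gms.common.GooglePlayServicesRepairableException;
import com.google.android.gms.location.places.Place;
import com.google.android.gms.location.places.ui.PlacePicker;

// A minimal sketch of launching the place picker and reading the result.
public class PlacePickerDemoActivity extends Activity {
  private static final int PLACE_PICKER_REQUEST = 1; // arbitrary request code

  void pickPlace() {
    try {
      PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
      startActivityForResult(builder.build(this), PLACE_PICKER_REQUEST);
    } catch (GooglePlayServicesRepairableException
        | GooglePlayServicesNotAvailableException e) {
      // Prompt the user to install or update Google Play services.
    }
  }

  @Override
  protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == PLACE_PICKER_REQUEST && resultCode == RESULT_OK) {
      Place place = PlacePicker.getPlace(data, this);
      // Use place.getName(), place.getAddress(), and so on.
    }
  }
}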


To get started with the Places API for Android, watch this DevByte, check out the developer documentation, and play with the demos. To apply for the Places API for iOS beta program, go here.



Take your apps on the road with Android Auto

Posted by Wayne Piekarski, Developer Advocate



Starting today, anyone can take their apps for a drive with Android Auto using Android 5.0+ devices, connected to compatible cars and aftermarket head units. Android Auto lets you easily extend your apps to the car in an efficient way for drivers, allowing them to stay connected while still keeping their hands on the wheel and their eyes on the road. When users connect their phone to a compatible vehicle, they will see an Android experience optimized for the head unit display that seamlessly integrates voice input, touch screen controls, and steering wheel buttons. Moreover, Android Auto provides consistent UX guidelines to ensure that developers are able to create great experiences across many diverse manufacturers and vehicle models, with a single application available on Google Play.



With the availability of the Pioneer AVIC-8100NEX, AVIC-7100NEX, and AVH-4100NEX aftermarket systems in the US, the AVIC-F77DAB, AVIC-F70DAB, and AVH-X8700BT in the UK, and the AVIC-F70DAB and AVH-X8750BT in Australia, it is now possible to add Android Auto to many cars already on the road. As a developer, you now have a way to test your apps in a realistic environment. These are just the first Android Auto devices to launch, and vehicles from major auto manufacturers with integrated Android Auto support are coming soon.



With the increasing adoption of Android Auto by manufacturers, your users are going to be expecting more support of their apps in the car, so now is a good time to get started with development. If you are new to Android Auto, check out our DevByte video, which explains more about how this works, along with some live demos.





The SDK for Android Auto was made available to developers a few months ago, and now Google Play is ready to accept your application updates. Your existing apps can take advantage of all these cool new Android Auto features with just a few small changes. You’ll need to add Android Auto support to your application, and then agree to the Android Auto terms in the Pricing & Distribution category in the Google Play Developer Console. Once the application is approved, it will be made available as an update to your users, and shown on the car’s display.



Adding support for Android Auto is easy. We have created an extensive set of documentation to help you add support for messaging (sample) and audio playback (sample). There are also short introduction DevByte videos for messaging and audio. Stay tuned for a series of posts coming up soon discussing more details of these APIs and how to work with them. We also have simulators to help you test your applications right at your desk during development.



With the launch of Android Auto, a new set of possibilities is available for you to make even more amazing experiences for your users, providing them the right information for the road ahead. Come join the discussion about Android Auto on Google+ at http://g.co/androidautodev where you can share ideas and ask questions with other developers.



Wednesday 18 March 2015

Android Developer Story: Outfit7 — Building an entertainment company with Google

Posted by Leticia Lago, Google Play team



Outfit7, creators of My Talking Tom and My Talking Angela, recently announced they’ve achieved 2.5 billion app downloads across their portfolio. The company now offers a complete entertainment experience to users spanning mobile apps, user generated and original YouTube content, and a range of toys, clothing, and accessories. They even have a silver screen project underway.



We caught up with Iza Login, Rok Zorko, and Marko Štamcar, some of the co-founders, in Ljubljana, Slovenia, to learn best practices that helped them reach this milestone.





To learn about some of the Google and Google Play features used by Outfit7 to create their successful business, check out these resources:



  • Monetization — explore the options available for generating revenue from your apps and games.

  • Monetization with AdMob — learn how you can maximize your ad revenue.

  • YouTube for Developers — Whether you’re building a business on YouTube or want to enhance your app with video, a rich set of YouTube APIs can bring your products to life.


Tuesday 17 March 2015

Creating Better User Experiences on Google Play

Posted by Eunice Kim, Product Manager for Google Play



Whether it's a way to track workouts, chart the nighttime stars, or build a new reality and battle for world domination, Google Play gives developers a platform to create engaging apps and games and build successful businesses. Key to that mission is offering users a positive experience while searching for apps and games on Google Play. Today we have two updates to improve the experience for both developers and users.



A global content rating system based on industry standards


Today we’re introducing a new age-based rating system for apps and games on Google Play. We know that people in different countries have different ideas about what content is appropriate for kids, teens and adults, so today’s announcement will help developers better label their apps for the right audience. Consistent with industry best practices, this change will give developers an easy way to communicate familiar and locally relevant content ratings to their users and help improve app discovery and engagement by letting people choose content that is right for them.



Starting now, developers can complete a content rating questionnaire for each of their apps and games to receive objective content ratings. Google Play’s new rating system includes official ratings from the International Age Rating Coalition (IARC) and its participating bodies, including the Entertainment Software Rating Board (ESRB), Pan-European Game Information (PEGI), Australian Classification Board, Unterhaltungssoftware Selbstkontrolle (USK) and Classificação Indicativa (ClassInd). Territories not covered by a specific ratings authority will display an age-based, generic rating. The process is quick, automated and free to developers. In the coming weeks, consumers worldwide will begin to see these new ratings in their local markets.



To help maintain your apps’ availability on Google Play, sign in to the Developer Console and complete the new rating questionnaire for each of your apps. Apps without a completed rating questionnaire will be marked as “Unrated” and may be blocked in certain territories or for specific users. Starting in May, all new apps and updates to existing apps will require a completed questionnaire before they can be published on Google Play.





An app review process that better protects users


Several months ago, we began reviewing apps before they are published on Google Play to better protect the community and improve the app catalog. This new process involves a team of experts who are responsible for identifying violations of our developer policies earlier in the app lifecycle. We value the rapid innovation and iteration that is unique to Google Play, and will continue to help developers get their products to market within a matter of hours after submission, rather than days or weeks. In fact, there has been no noticeable change for developers during the rollout.



To assist in this effort and provide more transparency to developers, we’ve also rolled out improvements to the way we handle publishing status. Developers now have more insight into why apps are rejected or suspended, and they can easily fix and resubmit their apps for minor policy violations.



Over the past year, we’ve paid more than $7 billion to developers and are excited to see the ecosystem grow and innovate. We’ll continue to build tools and services that foster this growth and help the developer community build successful businesses.



Friday 13 March 2015

Haystack TV Doubles Engagement with Android TV

Posted by Joshua Gordon, Developer Advocate



Haystack TV is a small six-person startup with an ambitious goal: personalize the news. Traditionally, watching news on TV means viewing a list of stories curated by the network. Wouldn’t it be better if you could watch a personalized news channel, based on interesting YouTube stories?



Haystack already had a mobile app, but entering the living room space seemed daunting. Although “Smart TVs” have been on the market for a while, they remain challenging for developers to work with. Many hardware OEMs have proprietary platforms, but Android TV is different. It’s an open ecosystem with great developer resources. Developers can reach millions of users with familiar Android APIs. If you have an existing Android app, it’s easy to bring it to the living room.



Two weeks was all it took for Haystack TV to bring their mobile app to Android TV. That includes building an immersive, cinematic UI (a task greatly simplified by the Android framework). Since launching on Android TV, Haystack TV’s viewership has been growing at 40% per month. Previously, users were spending about 40 minutes per week watching content on mobile. Now that’s up to 80 minutes in the living room. Their longest engagements are through Chromecast and Android TV.





Hear from Daniel Barreto, CEO of Haystack TV, on developing for Android TV



Haystack TV’s success on Android TV is a great example of how the Android multi-form factor developer experience shines. Once you’ve learned the ropes of writing Android apps, developing for another form factor (Wear, Auto, TV) is simple.



Android TV helps you create cinematic UIs



Haystack TV’s UI is smooth and cinematic. How were they able to build a great one so quickly? Developing an immersive UI/UX with Android TV is surprisingly easy. The Leanback support library provides fragments for browsing content, showing a details screen, and search. You can use these to get transitions and animations almost for free. To learn more about building UIs for Android TV, watch the Using the Leanback Library DevByte and check out the code samples.
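
For instance, a browse screen can be as small as this sketch; the class name, the title string, and the commented-out adapter setup are placeholders:

import android.os.Bundle;
import android.support.v17.leanback.app.BrowseFragment;

// A minimal sketch of a Leanback browse screen.
public class MainFragment extends BrowseFragment {
  @Override
  public void onActivityCreated(Bundle savedInstanceState) {
    super.onActivityCreated(savedInstanceState);
    setTitle("My Channel");            // shown in the browse title area
    setHeadersState(HEADERS_ENABLED);  // show the row headers column
    // setAdapter(...);                // supply an ArrayObjectAdapter of ListRows
  }
}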



Browsing recommended stories



Your content, front and center



The recommendations row is a central feature of the Android TV home screen. It’s the first thing users see when they turn on their TVs. You can surface content to appear on the recommendations row by implementing the recommendation service. For example, your app can suggest videos your users will want to watch next (say, the next episode in a series, or a related news story). This is great for getting noticed and increasing engagement.
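
Recommendations are posted as notifications with the recommendation category. The sketch below shows the core of that idea; the strings and the built-in icon are placeholders, and a real implementation would also set a content intent and run from a background service:

import android.app.Notification;
import android.app.NotificationManager;
import android.content.Context;

// A minimal sketch of posting one card to the recommendations row.
public class Recommendations {
  public static void post(Context context, int id) {
    Notification rec = new Notification.Builder(context)
        .setContentTitle("Next story")                     // placeholder title
        .setContentText("Continue where you left off")     // placeholder text
        .setSmallIcon(android.R.drawable.ic_media_play)    // placeholder icon
        .setCategory(Notification.CATEGORY_RECOMMENDATION) // marks it as a recommendation
        .setLocalOnly(true)
        .setOngoing(true)
        .build();
    NotificationManager nm =
        (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
    nm.notify(id, rec);
  }
}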





Make your content searchable



How can users find their favorite movie or show from a library of thousands? On Android TV, they can search for it using their voice. This is much faster and more relaxing than typing on the screen with a remote control! In addition to providing in-app search, your app can surface content to appear on the global search results page. The framework takes care of speech recognition for you and delivers the result to your app as a plain text string.
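
Within your app, the Leanback SearchFragment wires voice input to your results. A minimal sketch, where loadRows() stands in for a hypothetical catalog query:

import android.os.Bundle;
import android.support.v17.leanback.app.SearchFragment;
import android.support.v17.leanback.widget.ArrayObjectAdapter;
import android.support.v17.leanback.widget.ListRowPresenter;
import android.support.v17.leanback.widget.ObjectAdapter;

// A minimal sketch of in-app voice search with Leanback.
public class MySearchFragment extends SearchFragment
    implements SearchFragment.SearchResultProvider {

  private final ArrayObjectAdapter rowsAdapter =
      new ArrayObjectAdapter(new ListRowPresenter());

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setSearchResultProvider(this);
  }

  @Override
  public ObjectAdapter getResultsAdapter() {
    return rowsAdapter;
  }

  @Override
  public boolean onQueryTextChange(String newQuery) {
    // The framework delivers recognized speech as a plain string.
    // loadRows(newQuery);  // hypothetical catalog lookup filling rowsAdapter
    return true;
  }

  @Override
  public boolean onQueryTextSubmit(String query) {
    // loadRows(query);
    return true;
  }
}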



Next Steps



Android TV makes it possible for small startups to create apps for the living room. There are extensive developer resources. For an overview, watch the Introduction to Android TV DevByte. For details, see the developer training docs. Watch this episode of Coffee with a Googler to learn more about the vision for the platform. To get started on your app, visit developer.android.com/tv.



Thursday 12 March 2015

A new reference app for multi-device applications

It is now possible to bring the benefits of your app to your users wherever they happen to be, no matter what device they have near them. Today we’re releasing a reference sample that shows how to implement such a service with an app that works across multiple Android form-factors. This sample, the Universal Music Player, is a bare-bones but functional reference app that supports multiple devices and form factors in a single codebase. It is compatible with Android Auto, Android Wear, and Google Cast devices. Give it a try and easily adapt your own app for wherever your users are, be that a phone, watch, TV, car, or more!




Playback controls and album art in the lock screen.

On the application toolbar, the Google Cast icon.





Controlling playback through Android Auto





Controlling playback on an Android Wear watch


This sample uses a number of new features in Android 5.0 Lollipop, such as MediaStyle notifications, MediaSession, and MediaBrowserService. They make it easy to implement media browsing and playback on multiple devices with a single version of your app.
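
As one example, a MediaStyle notification ties the lock screen controls to your session. A minimal sketch, with placeholder strings and a built-in icon standing in for your own artwork; a real notification would also add play/pause and skip actions via addAction():

import android.app.Notification;
import android.content.Context;
import android.media.session.MediaSession;

// A minimal sketch of a MediaStyle notification bound to a MediaSession.
public class MediaNotifications {
  public static Notification build(Context context, MediaSession session) {
    return new Notification.Builder(context)
        .setSmallIcon(android.R.drawable.ic_media_play) // placeholder icon
        .setContentTitle("Track title")                 // placeholder metadata
        .setContentText("Artist name")
        .setStyle(new Notification.MediaStyle()
            .setMediaSession(session.getSessionToken()))
        .build();
  }
}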


Check out the source code and let your users enjoy your app from wherever they like.


Posted by Renato Mangini, Senior Developer Platform Engineer, Google Developer Platform Team

Tuesday 10 March 2015

Android 5.1 Lollipop SDK




style="border-radius: 6px;padding:0;margin:0;" />



By Jamal Eason, Product Manager, Android




Yesterday we announced Android 5.1, an updated version of the Android Lollipop platform that improves stability, provides better control of notifications, and increases performance. As a part of the Lollipop update, we are releasing the Android 5.1 SDK (API Level 22) which supports the new platform and lets you get started with developing and testing.



What's new in Android 5.1?



For developers, Android 5.1 introduces a small set of new APIs. A key addition is support for multiple SIM cards, which is important for many regions where Android One phones are being adopted. Consumers of Android One devices will have more flexibility to switch between carriers and manage their network activities in the way that works best for them. As a developer, you can create new app experiences that take advantage of this feature.
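
For instance, the new SubscriptionManager in API level 22 lets an app enumerate the active SIMs. A minimal sketch (requires the READ_PHONE_STATE permission; the class name and log tag are placeholders):

import android.content.Context;
import android.telephony.SubscriptionInfo;
import android.telephony.SubscriptionManager;
import android.util.Log;
import java.util.List;

// A minimal sketch of listing the active SIM subscriptions on a device.
public class MultiSimHelper {
  public static void listActiveSims(Context context) {
    SubscriptionManager sm = SubscriptionManager.from(context);
    List<SubscriptionInfo> subs = sm.getActiveSubscriptionInfoList();
    if (subs == null) return; // no active subscriptions or no permission
    for (SubscriptionInfo info : subs) {
      Log.d("MultiSim", "Slot " + info.getSimSlotIndex()
          + ": " + info.getCarrierName());
    }
  }
}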





In addition to the new consumer features, Android 5.1 also enhances enterprise features to better support the launch of Android for Work.




Android 5.1 supports multiple SIM cards on compatible devices like Android One.






Updates for the Android SDK



To get you started with Android 5.1, we have updated the Android SDK tools to support the new platform and its new APIs. The SDK now includes Android 5.1 emulator system images that you can use to test your apps and develop using the latest capabilities and APIs. You can update your SDK through the Android SDK Manager in Android Studio.



For details on the new developer APIs, take a look at the API Overview.






Coming to Nexus devices soon



Over the next few weeks, we’ll be rolling out updates for Android 5.1 to the following Nexus devices: Nexus 4, Nexus 5, Nexus 6, Nexus 7 (2012), Nexus 7 (2012) (3G), Nexus 7 (2013), Nexus 7 (2013) (3G/LTE), Nexus 9, Nexus 9 (LTE), Nexus 10, and Nexus Player.



Next Steps



As with all Android releases, it’s a good idea to test your apps on the new platform as soon as possible. You can get started today using Android 5.1 system images with the emulator that’s included in the SDK, or you can download an Android 5.1 Nexus image and flash the system image to your Nexus device.



If you have not had a chance to update your app to material design, or explore how your app might work on Android Wear, Android TV, or even Android Auto, now is a good time to get started with the Android 5.1 SDK update.




Monday 2 March 2015

Google Play services 7.0 - Places Everyone!

Posted by Ian Lake, Developer Advocate



Today, we’re bringing you new tools to build better apps with the completion of the rollout of Google Play services 7.0. With this release, we’re delivering improvements to the location settings experience, a brand new API for place information, new fitness data types, new Google Play Games tools, and more.



Location Settings Dialog


While the FusedLocationProviderApi combines multiple sensors to give you the optimal location, the accuracy of the location your app receives still depends greatly on the settings enabled on the device (e.g., GPS, Wi-Fi, airplane mode). In Google Play services 7.0, we’re introducing a standard mechanism to check that the necessary location settings are enabled for a given LocationRequest to succeed. If there are possible improvements, you can display a one-touch control for the user to change their settings without leaving your app.
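
A minimal sketch of that check, assuming an already-connected GoogleApiClient; the class name and request code are placeholders:

import android.app.Activity;
import android.content.IntentSender;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.common.api.Status;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationServices;
import com.google.android.gms.location.LocationSettingsRequest;
import com.google.android.gms.location.LocationSettingsResult;
import com.google.android.gms.location.LocationSettingsStatusCodes;

// A minimal sketch of checking whether the device settings satisfy a LocationRequest.
public class LocationSettingsHelper {
  public static final int REQUEST_CHECK_SETTINGS = 1; // arbitrary request code

  public static void checkSettings(GoogleApiClient client, final Activity activity) {
    LocationRequest request = LocationRequest.create()
        .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);
    LocationSettingsRequest settingsRequest = new LocationSettingsRequest.Builder()
        .addLocationRequest(request)
        .build();
    LocationServices.SettingsApi
        .checkLocationSettings(client, settingsRequest)
        .setResultCallback(new ResultCallback<LocationSettingsResult>() {
          @Override
          public void onResult(LocationSettingsResult result) {
            Status status = result.getStatus();
            if (status.getStatusCode() == LocationSettingsStatusCodes.RESOLUTION_REQUIRED) {
              try {
                // Shows the one-touch dialog so the user can fix their settings.
                status.startResolutionForResult(activity, REQUEST_CHECK_SETTINGS);
              } catch (IntentSender.SendIntentException e) {
                // Ignore; the user can change settings manually.
              }
            }
          }
        });
  }
}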





This API provides a great opportunity to deliver a much better user experience, particularly if location information is critical to your app. For example, when Google Maps integrated the Location Settings dialog, they saw a dramatic increase in the number of users in a good location state.



Places API


Location can be so much more than a latitude and longitude: the new Places API makes it easy to get details from Google’s database of places and businesses. The built-in place picker makes it easy for the user to pick their current place and provides all the relevant place details including name, address, phone number, website, and more.





If you prefer to provide your own UI, the getCurrentPlace() API returns places directly around the user’s current location. Autocomplete predictions are also provided to allow a low latency search experience directly within your app.
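
A minimal sketch of getCurrentPlace(), assuming a connected GoogleApiClient that includes Places.PLACE_DETECTION_API; passing null uses no place filter, and the class name is ours:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.location.places.PlaceLikelihood;
import com.google.android.gms.location.places.PlaceLikelihoodBuffer;
import com.google.android.gms.location.places.Places;

// A minimal sketch of listing the places around the user's current location.
public class CurrentPlaceHelper {
  public static void logLikelyPlaces(GoogleApiClient client) {
    Places.PlaceDetectionApi.getCurrentPlace(client, null)
        .setResultCallback(new ResultCallback<PlaceLikelihoodBuffer>() {
          @Override
          public void onResult(PlaceLikelihoodBuffer likelyPlaces) {
            for (PlaceLikelihood likelihood : likelyPlaces) {
              // likelihood.getPlace().getName() and likelihood.getLikelihood()
            }
            likelyPlaces.release(); // buffers must be released when done
          }
        });
  }
}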



You can also manually add places with the addPlace() API and report that the user is at a particular place, ensuring that even the most explorative users can input and share their favorite new places.






The Places API will also be available cross-platform: in a few days, you’ll be able to apply for the Places API for iOS beta program to ensure a great and consistent user experience across mobile platforms.



Google Fit


Google Fit makes building fitness apps easier with fitness-specific APIs for retrieving sensor data like current location and speed, collecting and storing activity data in Google Fit’s open platform, and automatically aggregating that data into a single view of the user’s fitness data.



In Google Play services 7.0, the previous Fitness.API that you passed into your GoogleApiClient has been replaced with a number of APIs, matching the high-level set of Google Fit Android APIs (a sketch of requesting them follows this list):


  • SENSORS_API to access raw sensor data via SensorsApi

  • RECORDING_API to record data via RecordingApi

  • HISTORY_API for inserting, deleting, or reading data via HistoryApi

  • SESSIONS_API for managing sessions via SessionsApi

  • BLE_API to interact with Bluetooth Low Energy devices via BleApi

  • CONFIG_API to access custom data types and settings for Google Fit via ConfigApi
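
A minimal sketch of building a client that requests only what it needs; the class name and the particular choice of APIs and scope here are illustrative:

import android.content.Context;
import com.google.android.gms.common.Scopes;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.Scope;
import com.google.android.gms.fitness.Fitness;

// A minimal sketch of requesting only the Fit APIs an app actually uses.
public class FitClientFactory {
  public static GoogleApiClient buildFitClient(Context context) {
    return new GoogleApiClient.Builder(context)
        .addApi(Fitness.SENSORS_API)  // raw sensor data
        .addApi(Fitness.HISTORY_API)  // reading and writing history
        .addScope(new Scope(Scopes.FITNESS_ACTIVITY_READ))
        .build();
  }
}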



This change significantly reduces the memory requirement for Google Fit enabled apps running in the background. As always, apps built on previous versions of Google Play services will continue to work, but we strongly suggest you rebuild your Google Fit enabled apps to take advantage of this change.



Having all the data can be an empowering part of making meaningful changes, and Google Fit is augmenting its existing data types with the addition of body fat percentage and sleep data.





Google Play Games


As announced at the Game Developers Conference (GDC), we’re offering new tools to supercharge your games on Google Play. Included in Google Play services 7.0 is the Nearby Connections API, allowing games to seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV.





App Indexing


App Indexing lets Google index apps just like websites, enabling Google search results to deep-link directly into your native app. We've simplified the App Indexing API to make this integration even easier for you by combining the existing view()/viewEnd() and action()/end() flows into a single start() and end() API.
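
A minimal sketch of the simplified flow; the title and deep-link URI are placeholders, and the client is assumed to be built with AppIndex.APP_INDEX_API:

import android.net.Uri;
import com.google.android.gms.appindexing.Action;
import com.google.android.gms.appindexing.AppIndex;
import com.google.android.gms.common.api.GoogleApiClient;

// A minimal sketch of recording a view of one piece of content.
public class AppIndexingHelper {
  public static Action buildViewAction() {
    return Action.newAction(
        Action.TYPE_VIEW,
        "Example article title", // user-visible title (placeholder)
        Uri.parse("android-app://com.example.app/http/example.com/articles/1"));
  }

  public static void onContentOpened(GoogleApiClient client, Action action) {
    AppIndex.AppIndexApi.start(client, action); // replaces view()/action()
  }

  public static void onContentClosed(GoogleApiClient client, Action action) {
    AppIndex.AppIndexApi.end(client, action);   // replaces viewEnd()/end()
  }
}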



Changes to GoogleApiClient


GoogleApiClient serves as the common entry point for accessing Google APIs. For this release, we’ve made retrieval of Google OAuth 2.0 tokens part of GoogleApiClient, making it much easier to request server auth codes to access Google APIs.



SDK Now Available!


You can get started developing today by downloading the Google Play services SDK from the Android SDK Manager.



To learn more about Google Play services and the APIs available to you through it, visit the Google Services section on the Android Developer site.





New Tools to Supercharge Your Games on Google Play

Posted by Greg Hartrell, Senior Product Manager of Google Play Games



Everyone has a gaming-ready device in their pocket today. In fact, of the one billion Android users in more than 190 countries, three out of four are gamers. This allows game developers to reach a global audience and build a successful business. Over the past year, we paid out more than $7 billion to developers distributing apps and games on Google Play.



At our Developer Day during the Game Developers Conference (GDC) taking place this week, we announced a set of new features for Google Play Games and AdMob to power great gaming. Rolling out over the next few weeks, these launches can help you better measure and monetize your games.



Better measure and adapt to player needs



“Player Analytics has helped me hone in on BombSquad’s shortcomings, right the ship, and get to a point where I can financially justify making the games I want to make.”



Eric Froemling, BombSquad developer



Google Play Games is a set of services that help game developers reach and engage their audience. To further that effort, we’re introducing Player Analytics, giving developers access to powerful analytics reports to better measure overall business success and understand in-game player behavior. Launching in the next few weeks in the Google Play Developer Console, the new tool will give indie developers and big studios better insight into how their players are progressing, spending, and churning; access to critical metrics like ARPPU and sessions per user; and assistance setting daily revenue targets.



BombSquad, created by a one-person game studio in San Francisco, was able to more than double its revenue per user on Google Play after implementing design changes informed by its beta test of Player Analytics.



Optimizing ads to earn the most revenue



After optimizing your game for performance, it’s important to build a smarter monetization experience tailored to each user. That’s why we’re announcing three important updates to the AdMob platform:



  • Native Ads: Currently available as a limited beta, participating game developers will be able to show ads in their app from Google advertisers, and then customize them so that users see ads that match the visual design of the game. Atari is looking to innovate on its games, like RollerCoaster Tycoon 4 Mobile, and more effectively engage users with this new feature.

  • In-App Purchase House Ads Beta: Game developers will be able to smartly grow their in-app purchase revenue for free. AdMob can now predict which users are more likely to spend on in-app purchases, and developers will be able to show these users customized text or display ads promoting items for sale. Currently in beta, this feature will be coming to all AdMob accounts in the next few weeks.

  • Audience Builder: A powerful tool that enables game developers to create lists of audiences based on how they use their game. They will be able to create customized experiences for users, and ultimately grow their app revenue.




"Atari creates great game experiences for our broad audience. We're happy to be partnering with Google and be the first games company to take part in the native ads beta and help monetize games in a way that enhances our users' experience."



Todd Shallbetter, Chief Operating Officer, Atari



New game experiences powered by Google


Last year, we launched Android TV as a way to bring Android into the living room, optimizing games for the big screen. The OEM ecosystem is growing, with announced smart TVs and micro-consoles from partners like Sony, TPVision/Philips, and Razer.



To make gaming even more dynamic on Android TV, we’re launching the Nearby Connections API with the upcoming update of Google Play services. With this new protocol, games can seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV. Beach Buggy Racing is a fun and competitive multiplayer racing game on Android TV that plans to use Nearby Connections in their summer release, and we are looking forward to more living room multiplayer games taking advantage of mobile devices as second screen controls.






At Google I/O last June, we also unveiled Google Cardboard with the goal of making virtual reality (VR) accessible to everyone. With Cardboard, we are giving game developers more opportunities to build unique and immersive experiences from nothing more than a piece of cardboard and your smartphone. The Cardboard SDKs for Android and Unity enable you to easily build VR apps or adapt your existing app for VR.



Check us out at GDC


Visit us at the Google booth #502 on the Expo floor to get hands-on experience with Project Tango, Niantic Labs, and Cardboard starting on Wednesday, March 4. Our teams from AdMob, AdWords, Analytics, Cloud Platform, and Firebase will also be available to answer any of your product questions.



For more information on what we’re doing at GDC, please visit g.co/dev/gdc2015.


