Thursday 26 March 2015

Game Performance: Layout Qualifiers

Today, we want to share some best practices on using the OpenGL Shading Language (GLSL) that can optimize the performance of your game and simplify your workflow. Specifically, layout qualifiers make your code more deterministic and improve performance by reducing setup work.



Let’s start with a simple vertex shader and change it as we go along.



This basic vertex shader takes a position and texture coordinates, transforms the position, and passes the texture coordinates on to the fragment shader:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

uniform mat4 matWorldViewProjection;

varying vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Vertex Attribute Index


To draw a mesh onto the screen, you need to create a vertex buffer and fill it with vertex data; for this example, that means positions and texture coordinates.



In our sample shader, the vertex data may be laid out like this:

struct Vertex
{
  Vector4 Position;
  Vector2 TexCoords;
};

Therefore, we defined our vertex shader attributes like this:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

To associate the vertex data with the shader attributes, a call to glGetAttribLocation gets the handle of the named attribute. Each attribute is then enabled with glEnableVertexAttribArray, and its format is described with glVertexAttribPointer. Since our vertex data is interleaved, the stride is the size of the whole Vertex struct and each attribute starts at its member offset:

// Interleaved vertices: the stride is sizeof(Vertex), and each attribute
// begins at its member offset (offsetof is declared in <cstddef>).
GLint handleVertexPos = glGetAttribLocation( myShaderProgram, "vertexPosition" );
glEnableVertexAttribArray( handleVertexPos );
glVertexAttribPointer( handleVertexPos, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)0 );

GLint handleVertexUV = glGetAttribLocation( myShaderProgram, "vertexUV" );
glEnableVertexAttribArray( handleVertexUV );
glVertexAttribPointer( handleVertexUV, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)offsetof(Vertex, TexCoords) );

But you may have multiple shaders with a vertexPosition attribute, and calling glGetAttribLocation for every shader wastes time and increases the loading time of your game.



Using layout qualifiers, you can declare your vertex shader attributes like this:

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

To do so, you also need to tell the shader compiler that your shader targets GL ES version 3.0. This is done by adding a version declaration:

#version 300 es

Let’s see how this affects our shader; the changes are the version declaration, the new in qualifiers, and the out declaration:

#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Note that we also changed outTexCoord from varying to out. The varying keyword is removed in version 300 es, so inputs and outputs must be declared with in and out for the shader to compile.
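
The fragment shader needs the same treatment: varying becomes in, gl_FragColor is replaced by a user-declared out variable, and texture2D() becomes texture(). As a minimal sketch of the matching fragment shader (the sampler name is our own example, not from the shader above):

#version 300 es
precision mediump float;

in vec2 outTexCoord;
out vec4 fragColor;

// Hypothetical sampler name for illustration.
uniform sampler2D diffuseTexture;

void main()
{
  fragColor = texture(diffuseTexture, outTexCoord);
}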



Note that vertex attribute layout qualifiers and #version 300 es are supported from OpenGL ES 3.0 onwards. The desktop equivalent is available from OpenGL 3.3, using #version 330.



Now you know your position attribute is always at location 0 and your texture coordinates are always at location 1, so you can bind your vertex format without calling glGetAttribLocation:

const int ATTRIB_POS = 0;
const int ATTRIB_UV = 1;

glEnableVertexAttribArray( ATTRIB_POS );
glVertexAttribPointer( ATTRIB_POS, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)0 );

glEnableVertexAttribArray( ATTRIB_UV );
glVertexAttribPointer( ATTRIB_UV, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)offsetof(Vertex, TexCoords) );

This simple change leads to a cleaner pipeline, simpler code, and faster loading times.


To learn more about performance on Android, check out the Android Performance Patterns series.



Posted by Shanee Nishry, Games Developer Advocate

Wednesday 25 March 2015

Developing audio apps for Android Auto

Posted by Joshua Gordon, Developer Advocate



Have you ever wanted to develop apps for the car, but found the variety of OEMs and proprietary platforms too big of a hurdle? Now with Android Auto, you can target a single platform supported by vehicles coming soon from 28 manufacturers.



Using familiar Android APIs, you can easily add a great in-car user experience to your existing audio apps, with just a small amount of code. If you’re new to developing for Auto, watch this DevByte for an overview of the APIs, and check out the training docs for an end-to-end tutorial.






Playback and custom controls

Custom playback controls on NPR One and iHeartRadio.



The first thing to understand about developing audio apps on Auto is that you don’t draw your user interface directly. Instead, the framework provides two well-defined UIs (one for playback, one for browsing) that are created automatically. This ensures consistent behavior for drivers across audio apps, and frees you from dealing with car-specific functionality or layouts. Although the layout is predefined, you can customize it with artwork, color themes, and custom controls.



Both NPR One and iHeartRadio customize their UI. NPR One adds controls to mark a story as interesting, to view a list of upcoming stories, and to skip to the next story. iHeartRadio adds controls to favorite stations and to like songs. Both apps store user preferences across form factors.
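
Mechanically, custom controls like these are added as custom actions on the session’s PlaybackState and delivered back through onCustomAction() in your callback. A minimal sketch, assuming an existing MediaSession (the action ID, label, and icon resource are our own examples):

import android.media.session.MediaSession;
import android.media.session.PlaybackState;

void publishPlayingState(MediaSession session, long positionMs) {
    PlaybackState state = new PlaybackState.Builder()
            .setState(PlaybackState.STATE_PLAYING, positionMs, 1.0f)
            // App-defined control, surfaced as an extra button in the car UI.
            .addCustomAction("com.example.action.MARK_INTERESTING",
                    "Mark as interesting", R.drawable.ic_star)
            .build();
    session.setPlaybackState(state);
}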



Because the UI is drawn by the framework, playback commands need to be relayed to your app. This is accomplished with the MediaSession callback, which has methods like onPlay() and onPause(). All car-specific functionality is handled behind the scenes. For example, you don’t need to know whether a command came from the touch screen, the steering wheel buttons, or the user’s voice.
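
A minimal callback might look like the following sketch; MyPlayer is hypothetical app-side code, not part of the framework:

import android.media.session.MediaSession;

public class MyMediaCallback extends MediaSession.Callback {
    private final MyPlayer player; // hypothetical app-side player

    public MyMediaCallback(MyPlayer player) {
        this.player = player;
    }

    @Override
    public void onPlay() {
        // Runs for play commands from touch, steering wheel, or voice alike.
        player.play();
    }

    @Override
    public void onPause() {
        player.pause();
    }
}

Attach it to your session with session.setCallback(new MyMediaCallback(player)).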



Browsing and recommendations

Browsing content on NPR One and iHeartRadio.



The browsing UI is likewise drawn by the framework. You implement the MediaBrowserService to share your content hierarchy with the framework. A content hierarchy is a collection of MediaItems that are either playable (e.g., a song, audio book, or radio station) or browsable (e.g., a favorites folder). Together, these form a tree used to display a browsable menu of your content.
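
As a sketch, a minimal service could look like this (the IDs and titles are our own placeholders; the service must also be declared in your manifest with the android.media.browse.MediaBrowserService intent filter):

import android.media.MediaDescription;
import android.media.browse.MediaBrowser;
import android.os.Bundle;
import android.service.media.MediaBrowserService;
import java.util.ArrayList;
import java.util.List;

public class MyBrowserService extends MediaBrowserService {
    @Override
    public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
        // Returning a root accepts the connection; return null to reject it.
        return new BrowserRoot("root", null);
    }

    @Override
    public void onLoadChildren(String parentId, Result<List<MediaBrowser.MediaItem>> result) {
        List<MediaBrowser.MediaItem> items = new ArrayList<>();
        MediaDescription recommended = new MediaDescription.Builder()
                .setMediaId("station_1")         // placeholder ID
                .setTitle("Recommended Station") // placeholder title
                .build();
        // FLAG_PLAYABLE marks a leaf item; use FLAG_BROWSABLE for folders.
        items.add(new MediaBrowser.MediaItem(recommended, MediaBrowser.MediaItem.FLAG_PLAYABLE));
        result.sendResult(items);
    }
}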



With both apps, recommendations are key. NPR One recommends a short list of in-depth stories that can be selected from the browsing menu. These improve over time based on user feedback. iHeartRadio’s browsing menu lets you pick from favorites and recommended stations, and their “For You” feature gives recommendations based on user location. The app also provides the ability to create custom stations from the browsing menu. Doing so is efficient, requiring only three taps (“Create Station” -> “Rock” -> “Foo Fighters”).



When developing for the car, it’s important to connect users with content quickly, to minimize distraction while driving. It’s also important to note that design considerations on Android Auto are different from those on a mobile device. If you imagine a typical media player on a phone, you may picture browsable menus of “all tracks” or “all artists”. These are not ideal in the car, where the primary focus should be on the road. Both NPR One and iHeartRadio provide good examples of this: they avoid deep menu hierarchies and lengthy browsable lists.



Voice actions for hands-free operation



Voice actions (e.g., “Play KQED”) are an important part of Android Auto. You can support voice actions in your app by implementing onPlayFromSearch() in the MediaSession.Callback. Voice actions may also be used to start your app from the home screen (e.g., “Play KQED on iHeartRadio”). To enable this functionality, declare the MEDIA_PLAY_FROM_SEARCH intent filter in your manifest. For an example, see this sample app.
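
On the callback side, a sketch might look like this; both helper methods are hypothetical app code:

import android.media.session.MediaSession;
import android.os.Bundle;

public class VoiceCallback extends MediaSession.Callback {
    @Override
    public void onPlayFromSearch(String query, Bundle extras) {
        if (query == null || query.isEmpty()) {
            // An empty query means "play something", e.g. "Play iHeartRadio".
            playDefault();
        } else {
            playBestMatchFor(query);
        }
    }

    private void playDefault() { /* hypothetical: resume or pick a station */ }

    private void playBestMatchFor(String query) { /* hypothetical: search your catalog */ }
}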



Next steps



NPR One and iHeartRadio are just two examples of great apps for Android Auto today. They feel like a part of the car, and look and sound great. You can extend your apps to the car today, too, and developing for Auto is easy. The framework handles the car-specific functionality for you, so you’re free to focus on making your app special. Join the discussion at http://g.co/androidautodev if you have questions or ideas to share. To get started on your app, visit developer.android.com/auto.



Thursday 19 March 2015

Hello Places API for Android and iOS!



Posted by Jen Kovnats Harrington, Product Manager, Google Maps APIs



Originally posted to Google Geo Developers blog



People don’t think of their location in terms of coordinates on a map. They want context on what shops or restaurants they’re at, and what’s around them. To help your apps speak your users’ language, we’re launching the Places API for Android, as well as opening a beta program for the Places API for iOS.



The Places API web service and JavaScript library have been available for some time. The new APIs add native support for Android and iOS devices, so you can optimize the mobile experience by taking advantage of the device’s location signals.



The Places APIs for Android and iOS bridge the gap between simple geographic locations expressed as latitude and longitude, and how people associate location with a known place. For example, you wouldn’t tell someone you were born at 25.7918359,-80.2127959. You’d simply say, “I was born in Jackson Memorial Hospital in Miami, Florida.” The Places API brings the power of Google’s global places database into your app, providing more than 100 million places, like restaurants, local businesses, hotels, museums, and other attractions.





Key features include:



  • Add a place picker: a drop-in UI widget that allows your users to specify a place (see the sketch after this list)

  • Get the place where the user is right now

  • Show detailed place information, including the place’s name, address, phone number, and website

  • Use autocomplete to save your users time and frustration typing out place names, by automatically completing them as they type

  • Make your app stand out by adding new places that are relevant to your users and seeing the places appear in Google's Places database

  • Improve the map around you by reporting the presence of a device at a particular place
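
To make the place picker concrete, here is a minimal sketch of the flow, assuming your project already depends on Google Play services and has an API key configured:

import android.app.Activity;
import android.content.Intent;
import com.google.android.gms.location.places.Place;
import com.google.android.gms.location.places.ui.PlacePicker;

public class PickPlaceActivity extends Activity {
    private static final int PLACE_PICKER_REQUEST = 1; // arbitrary request code

    private void launchPlacePicker() {
        try {
            Intent intent = new PlacePicker.IntentBuilder().build(this);
            startActivityForResult(intent, PLACE_PICKER_REQUEST);
        } catch (Exception e) {
            // Google Play services is missing or needs an update; recover gracefully.
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == PLACE_PICKER_REQUEST && resultCode == RESULT_OK) {
            Place place = PlacePicker.getPlace(data, this);
            CharSequence name = place.getName(); // address, phone number, and website are also available
        }
    }
}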


To get started with the Places API for Android, watch this DevByte, check out the developer documentation, and play with the demos. To apply for the Places API for iOS beta program, go here.



Take your apps on the road with Android Auto

Posted by Wayne Piekarski, Developer Advocate



Starting today, anyone can take their apps for a drive with Android Auto using Android 5.0+ devices, connected to compatible cars and aftermarket head units. Android Auto lets you easily extend your apps to the car in an efficient way for drivers, allowing them to stay connected while still keeping their hands on the wheel and their eyes on the road. When users connect their phone to a compatible vehicle, they will see an Android experience optimized for the head unit display that seamlessly integrates voice input, touch screen controls, and steering wheel buttons. Moreover, Android Auto provides consistent UX guidelines to ensure that developers are able to create great experiences across many diverse manufacturers and vehicle models, with a single application available on Google Play.



With the availability of the Pioneer AVIC-8100NEX, AVIC-7100NEX, and AVH-4100NEX aftermarket systems in the US, the AVIC-F77DAB, AVIC-F70DAB, and AVH-X8700BT in the UK, and the AVIC-F70DAB and AVH-X8750BT in Australia, it is now possible to add Android Auto to many cars already on the road. As a developer, you now have a way to test your apps in a realistic environment. These are just the first Android Auto devices to launch; vehicles from major auto manufacturers with integrated Android Auto support are coming soon.



With the increasing adoption of Android Auto by manufacturers, your users will expect their apps to work in the car, so now is a good time to get started with development. If you are new to Android Auto, check out our DevByte video, which explains how this works, along with some live demos.





The SDK for Android Auto was made available to developers a few months ago, and now Google Play is ready to accept your application updates. Your existing apps can take advantage of all these cool new Android Auto features with just a few small changes. You’ll need to add Android Auto support to your application, and then agree to the Android Auto terms in the Pricing & Distribution category in the Google Play Developer Console. Once the application is approved, it will be made available as an update to your users, and shown in the cars’ display.
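
The car-specific part of that change is small. As a sketch (the names follow the Android Auto documentation), you add a meta-data entry inside <application> in your AndroidManifest.xml:

    <meta-data
        android:name="com.google.android.gms.car.application"
        android:resource="@xml/automotive_app_desc" />

and a res/xml/automotive_app_desc.xml file declaring what your app offers in the car:

    <automotiveApp>
        <uses name="media" />
    </automotiveApp>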



Adding support for Android Auto is easy. We have created an extensive set of documentation to help you add support for messaging (sample) and audio playback (sample). There are also short introductory DevByte videos for messaging and audio. Stay tuned for a series of posts coming up soon discussing these APIs in more detail and how to work with them. We also have simulators to help you test your applications right at your desk during development.



With the launch of Android Auto, a new set of possibilities is available for you to make even more amazing experiences for your users, providing them the right information for the road ahead. Come join the discussion about Android Auto on Google+ at http://g.co/androidautodev, where you can ask questions and share ideas with other developers.


