This application uses two APIs from the Google Cloud Platform to fetch and display data and imagery about real-world locations that correspond to points users select on the globe.
Geocoding API: Fetches information about a location on the globe.
Map Tiles API: Fetches street view panoramic imagery from a location on the globe.
Core Files
The core files for this integration are located within the directory app/src/main/java/com/meta/pixelandtexel/geovoyage/services/googlemaps and its subfolders.
Reverse geocoding, as described in the official API documentation, involves translating a map location into a human-readable address. In the Explore play mode of this application, the reverse geocoding service activates when a user drops a pin on the map. The application uses the address or location information returned from the API request to send a more informed query to the Llama server. For additional details on the templated Llama queries used in this application, refer to the relevant documentation page.
The service also functions as a validation or filtering mechanism. It excludes locations without known information or data on the Google Maps platform, such as areas in the middle of the Pacific Ocean, from being included in a Llama query.
The primary method for utilizing this reverse geocoding service in the application is through the GoogleMapsService.getPlace function. This function requires two arguments: a GeoCoordinates instance and an IGeocodeServiceHandler instance. The latter handles callbacks from the service and manages the events and results. The getPlace function operates as follows:
Builds the request URL using the formatted geo coordinates as query parameters (a sketch of this step follows the list).
Sends the request and parses the JSON response.
Returns early if no results are found.
Iterates through the results and extracts a name for the first place returned.
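As a rough sketch of the first step, the request URL might be built as follows, using the public Geocoding API web service endpoint. The buildGeocodeUrl helper and the latitude/longitude properties on GeoCoordinates are assumptions for illustration, not the app's exact implementation:

// Hedged sketch: buildGeocodeUrl is hypothetical, and GeoCoordinates is assumed
// to expose latitude/longitude values; the endpoint is the public Geocoding API
fun buildGeocodeUrl(coords: GeoCoordinates, apiKey: String): String {
    return "https://maps.googleapis.com/maps/api/geocode/json" +
        "?latlng=${coords.latitude},${coords.longitude}" +
        "&key=$apiKey"
}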
Note that in this application, the reverse geocode results are filtered by the place types included in the resultTypeFilter list below. The function succeeds and returns the formatted_address or long_name of the result, provided that at least one place of the specified types is returned from the API.
private val resultTypeFilter =
    listOf("country", "political", "natural_feature", "point_of_interest")
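A minimal sketch of that filtering, assuming the JSON response has been parsed into result objects exposing the Geocoding API's types and formatted_address fields (the property names and the handler call are assumptions):

// Hedged sketch of the result-type filtering; property names are assumptions
val match = results.firstOrNull { result ->
    result.types.any { type -> type in resultTypeFilter }
}
// succeed with the place name, or pass null if no suitable place was found
handler.onFinished(match?.formattedAddress)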
Unlike the other services this application uses, which involve long-running operations, this service makes quick API calls and uses Kotlin coroutines for asynchronous execution.
Example usage
You can see an example of usage in the startQueryAtCoordinates function located in the /app/src/main/java/com/meta/pixelandtexel/geovoyage/viewmodels/ExploreViewModel.kt file, where UI updates and error handling are implemented. It is important to handle errors gracefully and update the user-facing UI appropriately.
GoogleMapsService.getPlace(coords, object : IGeocodeServiceHandler {
    override fun onFinished(place: String?) {
        if (place.isNullOrEmpty()) {
            // no location found; don't query llama
            return
        }
        // submit templated query to llama, with place name injected into query
    }

    override fun onError(reason: String) {
        // handle request error
    }
})
Map tiles imagery for MR to VR transition
In the Explore play mode, when the user drops a pin on the globe, the application uses the Google Map Tiles API to fetch metadata about panoramic imagery near that location, then fetches the panorama itself from the Street View Tiles endpoints and displays it in the headset.
Methodology
This is the process used to display a panorama in-headset.
Create a new session, which provides a token required by further API requests (a sketch of this step follows the list).
Fetch the metadata for a panoramic image closest to a set of geographical coordinates within a specified radius.
Determine the appropriate zoom level and image tiles needed to compose the full panoramic image.
Fetch the tiles as bitmaps.
Combine the bitmaps into a larger single bitmap.
Assign this bitmap image to a Spatial SDK skybox entity material.
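The first step, session creation, isn't shown elsewhere on this page. Below is a minimal sketch based on the public Map Tiles API createSession endpoint; the OkHttp and org.json usage is illustrative and simplified rather than the app's exact implementation:

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

private val client = OkHttpClient()

// Hedged sketch of creating a Map Tiles API session token (step 1);
// error handling is omitted for brevity
fun createStreetViewSession(apiKey: String): String {
    val body = """{"mapType": "streetview", "language": "en-US", "region": "US"}"""
        .toRequestBody("application/json".toMediaType())
    val request = Request.Builder()
        .url("https://tile.googleapis.com/v1/createSession?key=$apiKey")
        .post(body)
        .build()
    client.newCall(request).execute().use { response ->
        val json = JSONObject(response.body!!.string())
        // the session token accompanies every subsequent metadata and tile request
        return json.getString("session")
    }
}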
Getting the panorama image’s metadata
The GoogleTilesService.getPanoramaDataAt function is where the application uses the Street View Tiles API. The function accepts a pair of geo coordinates and a search radius around those coordinates. The streetview/metadata endpoint then fetches the metadata for the panoramic image that’s closest to those coordinates and within the radius, provided such an image exists.
data class PanoMetadata(
    val panoId: String,

    // image and tile dimensions for fetching images to compose the full image
    val imageHeight: Int,
    val imageWidth: Int,
    val tileHeight: Int,
    val tileWidth: Int,

    // data about the image that Google requires you to display
    val copyright: String,
    val date: String?,
    val reportProblemLink: String,
)
Example usage:
val coords = GeoCoordinates(37.485073f, -122.150856f)

CoroutineScope(Dispatchers.Main).launch {
    val panoData: PanoMetadata? = GoogleTilesService.getPanoramaDataAt(coords, 10000)
    if (panoData != null) {
        // display the metadata, and/or fetch and display the panorama bitmap
    }
}
In this application, the API request occurs in the background when you drop a pin on the globe. If a panoramic image is available, the VR Mode button on the Explore panel becomes enabled. Selecting this enabled button triggers the next step in the implementation.
Note: The Map Tiles API usage policy requires you to display certain information when using the API in your application. In the PanoMetadata response, the last three properties appear on the Explore panel, as shown below. Selecting the Report a problem with this image link opens a web browser view to the specified URL.
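For illustration, the attribution could be wired up with standard Android views and an ACTION_VIEW intent; the view and context references here are hypothetical:

// Hedged sketch of surfacing the required attribution; view names are hypothetical
copyrightTextView.text = panoData.copyright
dateTextView.text = panoData.date ?: ""
reportProblemButton.setOnClickListener {
    // open the report-a-problem URL in a web browser view
    context.startActivity(Intent(Intent.ACTION_VIEW, Uri.parse(panoData.reportProblemLink)))
}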
Next, you pass the fetched metadata to the GoogleTilesService.getPanoramaBitmapFor function. The function accepts the metadata object and an object that implements the IPanoramaServiceHandler interface to receive callbacks with the resulting Bitmap object or an error message.
interface IPanoramaServiceHandler {
    fun onFinished(bitmap: Bitmap)

    fun onError(reason: String)
}
Example usage:
GoogleTilesService.getPanoramaBitmapFor(
    panoData,
    object : IPanoramaServiceHandler {
        override fun onFinished(bitmap: Bitmap) {
            // set the skybox entity's material albedo texture to the bitmap
        }

        override fun onError(reason: String) {
            // handle request error
        }
    }
)
Determining the zoom level and tiles to fetch
The streetview/tiles endpoint cannot fetch a full-size street view panoramic image in a single request. Instead, you must perform a series of requests, specifying a zoom level and the x/y indices of the tile you are fetching, then stitch them together to create a combined bitmap image. Zoom levels range from 0 to 5 and determine the approximate field of view for the tile image you are fetching. A higher zoom level results in a larger combined image with greater visible detail, requiring more tile image network requests. Conversely, a lower zoom level results in a lower visible quality combined image when viewed in a 360 view, but requires fewer network requests.
In this application, the desired zoom level is determined by calculating the largest image that can be loaded while keeping the number of network requests needed to fetch the entire image at or below the predefined threshold MAX_TILE_FETCHES_PER_PANO, defined in GoogleTilesService. This approach controls the overall size and visible detail of the image while limiting the number of network requests that must complete before the panorama can be viewed in the headset.
The steps to calculate that zoom level are as follows (a sketch in code follows the list):
Calculate the number of tiles needed to fetch the full resolution panorama image using the image and tile width/height values in the metadata.
Determine the zoom level needed to fetch the full resolution panorama image by using the approximate field of view values for each zoom level.
Working backwards from that full resolution zoom level, decrease the calculated zoom level until the number of network requests required to fetch all tiles composing the entire image at that zoom level is less than or equal to MAX_TILE_FETCHES_PER_PANO.
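A minimal sketch of this calculation, assuming the metadata's image dimensions correspond to the maximum zoom level (5) and halve with each level below it; the threshold value here is illustrative:

import kotlin.math.ceil

const val MAX_ZOOM = 5
const val MAX_TILE_FETCHES_PER_PANO = 32 // illustrative value

// Hedged sketch of the zoom-level selection described above
fun selectZoomLevel(metadata: PanoMetadata): Int {
    for (zoom in MAX_ZOOM downTo 0) {
        // image dimensions are assumed to halve with each zoom level below max
        val scale = 1 shl (MAX_ZOOM - zoom)
        val numTilesX = ceil(metadata.imageWidth / scale / metadata.tileWidth.toFloat()).toInt()
        val numTilesY = ceil(metadata.imageHeight / scale / metadata.tileHeight.toFloat()).toInt()
        if (numTilesX * numTilesY <= MAX_TILE_FETCHES_PER_PANO) {
            return zoom
        }
    }
    return 0
}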
Note: The panoramic images available from this API come from two sources: Google and public user-generated content. Additionally, the panorama images can vary in size. This application accounts for those different sizes by using the size values in the metadata to determine how the image tiles are fetched and stitched together.
Fetching and combining the tiles
After the zoom level has been determined, the service concurrently fetches all of the tile images using the tile coordinates and zoom level. This implementation uses Kotlin coroutines with async/await for asynchronous execution, and the OkHttp library with the BitmapFactory.decodeStream function to fetch the tiles and create the Bitmap objects.
val numTotalTiles: Int = numTilesX * numTilesY
Log.d(TAG, "Begin fetching $numTotalTiles image tiles at zoom $zoom")

// fetch all of the tiles
val tilesFetches = (0 until numTotalTiles).map { i ->
    val x = i % numTilesX
    val y = i / numTilesX
    async(Dispatchers.IO) {
        getTileImage(metadata.panoId, x, y, zoom)
    }
}

val tiles = tilesFetches.mapNotNull { it.await() }
if (tiles.size < numTotalTiles) {
    throw Exception("Failed to get all tile images")
}
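The getTileImage function referenced above isn't shown on this page. Below is a hedged sketch of what it might look like, using the public streetview/tiles endpoint with OkHttp and BitmapFactory; the client, session, and apiKey values are assumed to be available from the earlier session setup:

// Hedged sketch of fetching a single tile bitmap; client, session, and apiKey
// are assumed to exist from the session setup, and error handling is simplified
private fun getTileImage(panoId: String, x: Int, y: Int, zoom: Int): Bitmap? {
    val url = "https://tile.googleapis.com/v1/streetview/tiles/$zoom/$x/$y" +
        "?session=$session&key=$apiKey&panoId=$panoId"
    val request = Request.Builder().url(url).build()
    client.newCall(request).execute().use { response ->
        if (!response.isSuccessful) return null
        return response.body?.byteStream()?.use { BitmapFactory.decodeStream(it) }
    }
}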
After all of the tiles have been fetched, this implementation iterates through the list of tiles and copies their content to a combined Bitmap object.
val combinedBitmap =
    Bitmap.createBitmap(fullWidth, fullHeight, Bitmap.Config.ARGB_8888)
val canvas = Canvas(combinedBitmap)

var src: Rect
var dst: Rect
for (y in 0 until numTilesY) {
    for (x in 0 until numTilesX) {
        // calculate the right and bottom boundaries for partial tiles
        val rightEdge = minOf((x + 1) * tileWidth, fullWidth)
        val bottomEdge = minOf((y + 1) * tileHeight, fullHeight)

        // skip tiles outside the bounds of the combined bitmap
        if (x * tileWidth >= fullWidth || y * tileHeight >= fullHeight) {
            continue
        }

        val tileIdx = y * numTilesX + x
        val tile = tiles[tileIdx]

        // now calculate the source and destination rects, only drawing the visible portion
        dst = Rect(x * tileWidth, y * tileHeight, rightEdge, bottomEdge)
        src = Rect(0, 0, dst.width(), dst.height())

        Log.d(
            TAG,
            "draw src rect $src from tile[$tileIdx] to combined bitmap at dst rect $dst"
        )

        canvas.drawBitmap(tile, src, dst, null)
    }
}
Displaying the panorama
With the combined Bitmap object returned from the GoogleTilesService object, you can display the 360-degree panorama in the headset. To do so, set the Spatial SDK skybox Entity's mesh material albedo texture to the Bitmap, and use the Visible component to make the Entity visible.
skyboxEntity.setComponent(Visible(false))

GoogleTilesService.getPanoramaBitmapFor(
    panoData,
    object : IPanoramaServiceHandler {
        override fun onFinished(bitmap: Bitmap) {
            if (!skyboxEntity.hasComponent<Mesh>() ||
                !skyboxEntity.hasComponent<Material>()
            ) {
                // handle skybox entity missing mesh or material
                return
            }

            val sceneMaterial = skyboxSceneObject!!.mesh?.materials?.get(0)
            if (sceneMaterial == null) {
                // handle skybox scene object mesh material not found
                return
            }

            // destroy our old skybox texture
            sceneMaterial.texture?.destroy()

            // set our new texture
            val sceneTexture = SceneTexture(bitmap)
            sceneMaterial.setAlbedoTexture(sceneTexture)

            skyboxEntity.setComponent(Visible(true))
        }

        override fun onError(reason: String) {
            // handle panorama fetch error
        }
    }
)
In this application, the skybox entity is defined in the scene.xml file, and it isn't visible until the user drops a pin on the globe during the Explore play mode.
In addition to the number of network requests required to construct a 360 bitmap of the panorama, consider the cost implications when selecting a zoom/quality level for your application. Each tile image request incurs a cost, which can accumulate quickly when viewing many high-resolution panoramas; for example, a 16384×8192 panorama fetched as 512×512 tiles requires 512 requests (32 × 16). Moreover, each Google project has a daily quota for the maximum number of image tile requests, and exceeding it renders the API inaccessible for the remainder of the day. This application uses a relatively high threshold for the number of network requests in order to fetch higher-resolution images, albeit at an increased cost.
Although it is out of scope for this application, fetching higher zoom level images in the direction the user is facing could reduce costs. The metadata for each panorama includes heading, tilt, and roll information. If your application is designed for a headset, you could fetch higher quality tiles within the virtual camera's field of view and lower quality tiles outside of it; when the user turns their head, recalculate the tiles within the field of view and replace any lower resolution tiles with higher resolution ones. You can observe this technique in action by opening a panorama image in street view on google.com/maps and quickly panning the camera left or right.
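As a rough sketch of that idea, the horizontal tile columns within the camera's field of view could be computed from the heading, assuming the panorama spans 360 degrees across numTilesX columns; the function and its parameters are hypothetical:

import kotlin.math.floor

// Hedged sketch: map a camera heading and horizontal FOV to the tile columns in view
fun tileColumnsInView(headingDeg: Float, fovDeg: Float, numTilesX: Int): List<Int> {
    val degPerTile = 360f / numTilesX
    val first = floor((headingDeg - fovDeg / 2f) / degPerTile).toInt()
    val last = floor((headingDeg + fovDeg / 2f) / degPerTile).toInt()
    // wrap column indices around the 360-degree seam
    return (first..last).map { ((it % numTilesX) + numTilesX) % numTilesX }
}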