Animal Observer for iPad

Nov 2016 · iOS

 

Overview

platform   iOS / iPad
name       Animal Observer
function   Health & Behaviour Observations
customer   Dian Fossey Gorilla Fund Intl
features   • 3D animal viewer
           • Drag & drop artefacts
           • Package management platform
           • Image zoom/resize/rotate with Core Graphics
           • JSON export
           • Objective-C modernisation

Description

Animal Observer is an iPad application for recording behavioural observations, activity budgets and animal health monitoring. The app is currently being used by researchers in Rwanda and Congo to study eastern gorillas. It provides 3D models of the animals so researchers can annotate health information using a drag-and-drop UI.

 

Detail


The Animal Observer project already existed when we were approached. We were asked to add functionality to the app and update a number of the existing features. Our work involved the following tasks.

Code Modernisation

Updating and modernising the Objective-C codebase from the iOS 5 SDK to iOS 10, including fixing all warnings and errors and converting the UI to Auto Layout.

Health Summaries

Adding health summaries to the setup screens: for any subject selected, a variable-length health summary was displayed in an accompanying table view.

Health Module

Creating a Health module which included:

  • a performant 3D viewer for the subject animals with full rotation over 360 frames plus top/bottom views
  • the ability for the subject to be rotated precisely with finger panning or advancing 20 frames at a time using back/forward buttons
  • support for 1x and 2x zoom modes
  • a menu of health concerns where items could be dragged onto the subject
  • health concerns covering a wide range of injury and lesion types, each represented by a graphic
  • popover previews for each lesion type
  • the ability to zoom, rotate and resize the lesions for more precise placement on the subject
  • the ability to record a lesion’s position, frame, size, rotation and type
  • the ability to record attributes for each lesion from a list of ~30 options
  • the ability to take photos (or use images stored on the device) and associate them with each lesion
  • a gallery preview of all added photos per lesion
  • storing health info in CoreData for each frame of each subject for any number of species
  • the ability to export all observation data as structured JSON
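
To make the shape of that per-lesion data concrete, the sketch below shows a hypothetical model object and its export via NSJSONSerialization. The class, property and key names (AOLesion, frameIndex and so on) are illustrative assumptions rather than the app's actual model, and the real records are persisted in CoreData as noted above.

    #import <UIKit/UIKit.h>

    // Hypothetical per-lesion record; names are illustrative, not the app's actual classes.
    @interface AOLesion : NSObject
    @property (nonatomic, copy)   NSString  *type;        // injury/lesion type
    @property (nonatomic, assign) NSInteger  frameIndex;  // frame of the 360-frame sequence
    @property (nonatomic, assign) CGPoint    position;    // placement on the subject
    @property (nonatomic, assign) CGFloat    size;
    @property (nonatomic, assign) CGFloat    rotation;
    @property (nonatomic, copy)   NSArray<NSString *> *attributes;
    @end

    @implementation AOLesion

    // Flatten the record into JSON-friendly types.
    - (NSDictionary *)dictionaryRepresentation {
        return @{ @"type"       : self.type ?: @"",
                  @"frame"      : @(self.frameIndex),
                  @"x"          : @(self.position.x),
                  @"y"          : @(self.position.y),
                  @"size"       : @(self.size),
                  @"rotation"   : @(self.rotation),
                  @"attributes" : self.attributes ?: @[] };
    }

    @end

    // Export an array of lesion records as structured JSON.
    static NSData *AOExportLesions(NSArray<AOLesion *> *lesions, NSError **error) {
        NSMutableArray *records = [NSMutableArray arrayWithCapacity:lesions.count];
        for (AOLesion *lesion in lesions) {
            [records addObject:[lesion dictionaryRepresentation]];
        }
        return [NSJSONSerialization dataWithJSONObject:@{ @"lesions" : records }
                                               options:NSJSONWritingPrettyPrinted
                                                 error:error];
    }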

Package Management

The task was to abstract the app’s data source so any species of animal could be managed. A UI was created so loading species worked in a similar way to buying apps in the App Store.

For any given species, generic data packages (zip archives) could be installed into the app. The packages:

  • contained info about a specific collection of animal subjects with text and image data about each one
  • allowed default images and data to be used when not overridden by package data
  • contained a list of injury data relevant to the species

Any number of packages could be loaded into the app at runtime, but only one could be active at a time.

Packages could be loaded, made active or deleted. Researchers could update the JSON files within each package to modify the number of subjects or any info about them. When a package was loaded into the app, the UI updated dynamically based on the currently active package.

In addition to providing dynamic content, the package management feature was responsible for:

  • parsing and validating package data and assets, including 3D models
  • checking the device for sufficient disk space
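
The free-space check can be sketched with NSFileManager, assuming the straightforward approach of reading the file system's free-byte count before a package is installed; the 50 MB threshold shown in the usage comment is purely illustrative.

    #import <Foundation/Foundation.h>

    // Returns YES if the device has at least `requiredBytes` of free disk space.
    static BOOL AOHasFreeDiskSpace(unsigned long long requiredBytes) {
        NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                      NSUserDomainMask, YES).firstObject;
        NSError *error = nil;
        NSDictionary *attributes = [[NSFileManager defaultManager] attributesOfFileSystemForPath:documentsPath
                                                                                            error:&error];
        if (!attributes) {
            NSLog(@"Could not read file system attributes: %@", error);
            return NO;
        }
        unsigned long long freeBytes = [attributes[NSFileSystemFreeSize] unsignedLongLongValue];
        return freeBytes >= requiredBytes;
    }

    // Usage: refuse to install a package when space is low, e.g.
    // if (!AOHasFreeDiskSpace(50ULL * 1024 * 1024)) { /* warn the researcher */ }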

File And Data Management

The project involved extensive file management: copying, deleting and renaming files, unzipping archive files, parsing and validating JSON files. Several example packages were created to demonstrate the various possible permutations of supplied and default data.
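
The parse-and-validate step for a package's JSON can be sketched with NSJSONSerialization, under the assumption that each file must be a top-level object containing certain required keys; the key names and error domain below are hypothetical, since the real packages define their own schema.

    #import <Foundation/Foundation.h>

    // Parse one of a package's JSON files and confirm the expected keys are present.
    static NSDictionary *AOParsePackageJSON(NSURL *fileURL, NSError **error) {
        NSData *data = [NSData dataWithContentsOfURL:fileURL options:0 error:error];
        if (!data) {
            return nil;
        }
        id json = [NSJSONSerialization JSONObjectWithData:data options:0 error:error];
        if (![json isKindOfClass:[NSDictionary class]]) {
            return nil; // the top level must be a JSON object
        }
        NSDictionary *package = (NSDictionary *)json;
        for (NSString *requiredKey in @[ @"species", @"subjects" ]) {   // hypothetical keys
            if (package[requiredKey] == nil) {
                if (error) {
                    *error = [NSError errorWithDomain:@"AOPackageErrorDomain"
                                                 code:1
                                             userInfo:@{ NSLocalizedDescriptionKey :
                                                         [NSString stringWithFormat:@"Missing key: %@", requiredKey] }];
                }
                return nil;
            }
        }
        return package;
    }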

Frameworks Used

  • UIKit
  • Foundation
  • Core Graphics
  • MapKit
  • CFNetwork
  • CoreData
  • OpenGLES
  • CoreVideo
  • QuartzCore

Data Hierarchy

One of the app’s more challenging requirements was the need to support optional data. Default data was embedded in the app and consisted of structured data and images supporting the gorilla species. Packages were parsed and loaded into memory; each consisted of 4 main JSON files and various folders of images representing lesion and subject data. All aspects of package data were optional, to the point where a package could be an empty zip file, e.g. foo.zip, from which the app would derive only the species name and present all of the default (gorilla-related) data.
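
The fallback behaviour reduces to a "package value if present, otherwise embedded default" lookup. A minimal sketch for the image case, with illustrative function, directory and asset names (the real lookup spans several data types):

    #import <UIKit/UIKit.h>

    // Use the package-supplied image for a subject when the active package provides one,
    // otherwise fall back to the default artwork embedded in the app bundle.
    static UIImage *AOImageForSubject(NSString *subjectName, NSString *packageImagesDirectory) {
        NSString *packagePath = [packageImagesDirectory
            stringByAppendingPathComponent:[subjectName stringByAppendingPathExtension:@"png"]];
        if ([[NSFileManager defaultManager] fileExistsAtPath:packagePath]) {
            return [UIImage imageWithContentsOfFile:packagePath];
        }
        // "default-subject" stands in for the default gorilla artwork shipped with the app.
        return [UIImage imageNamed:@"default-subject"];
    }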

Modifying the health JSON data to support the multi-package format also made the structure considerably more complex. The finalised sample data consisted of 5 dimensions and over 2,000 elements.

Complex View Hierarchy

3D Viewer Performance

The client provided hi-res images and we needed to show a smooth animation from the 360-frame sequence. Each image was a 1200 px square PNG graphic, approximately 400 KB in size. I had to ensure the data pre-loaded quickly and the rotation animation ran smoothly.
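
The frame display itself is handled with OpenGL (see "Snapshotting OpenGL Views" below), but part of keeping the rotation smooth is ensuring PNG decompression never happens on the main thread mid-gesture. A hedged sketch of one common pre-decoding technique; the assumption that only a window of frames is kept decoded at once is mine, not a detail from the project:

    #import <UIKit/UIKit.h>

    // Force a frame's PNG data to be decompressed ahead of time by drawing it once into
    // an off-screen context, so displaying it later does not stall on decoding.
    // (At 1200 px square per frame, only a window of nearby frames would realistically
    // be held decoded in memory at any one time.)
    static UIImage *AOPredecodedFrame(NSString *path) {
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        if (!image) {
            return nil;
        }
        UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
        [image drawAtPoint:CGPointZero];
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return decoded;
    }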

Managing Paging for 3D Viewer

This was an interesting UI and usability challenge. Lesions needed to be dropped onto specific frames, but if the drop target was, e.g., frame 278 and the user navigated to 279, the lesion would disappear. I solved this by dividing the rotation sequence into 20-frame bands, or segments. Whichever frame a lesion was dropped onto, it would remain visible for 10 frames to either side. This also removed the need for the user to navigate to precise frames, something that would be difficult with a finger-panning gesture. I added the option of navigating left and right through the 20-frame segments using back/forward buttons. See the video for an example of the completed UI’s ease of use.
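
The banding reduces to simple frame arithmetic. An illustrative sketch, assuming 360 frames that wrap around, a visibility window of 10 frames either side of the drop frame, and buttons that step a full 20-frame segment (the exact band boundaries in the shipped app may differ):

    #import <Foundation/Foundation.h>

    static const NSInteger AOTotalFrames   = 360;
    static const NSInteger AOSegmentLength = 20;   // back/forward buttons step by one segment
    static const NSInteger AOHalfWindow    = 10;   // a lesion stays visible 10 frames either side

    // Shortest circular distance between two frames, e.g. frames 359 and 1 are 2 apart.
    static NSInteger AOCircularDistance(NSInteger a, NSInteger b) {
        NSInteger diff = (a > b ? a - b : b - a) % AOTotalFrames;
        return MIN(diff, AOTotalFrames - diff);
    }

    // A lesion dropped on frame 278 therefore still shows on frame 279, and so on.
    static BOOL AOLesionIsVisible(NSInteger dropFrame, NSInteger displayedFrame) {
        return AOCircularDistance(dropFrame, displayedFrame) <= AOHalfWindow;
    }

    // Back/forward buttons advance one whole segment at a time, wrapping at 360.
    static NSInteger AOFrameAfterStep(NSInteger currentFrame, NSInteger direction) {
        NSInteger next = currentFrame + direction * AOSegmentLength;
        return ((next % AOTotalFrames) + AOTotalFrames) % AOTotalFrames;
    }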

KVO Data Source

Changing the data source, i.e. the currently selected species, invoked a large number of updates in the UI and model. I modelled this using KVO, listening to an NSMutableArray datasource. By default KVO doesn’t fire for mutations made directly to a collection, so I overcame this by using KVC and mapping array-modification methods onto the datasource property.
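
A minimal sketch of that pattern, assuming a data source exposing a "subjects" array (class and key names are illustrative): implementing the KVC indexed collection accessors and routing every mutation through mutableArrayValueForKey: means observers of the subjects key path receive proper change notifications for insertions and removals.

    #import <Foundation/Foundation.h>

    // Illustrative data source whose "subjects" collection is KVO-observable.
    @interface AODataSource : NSObject
    @property (nonatomic, readonly) NSArray *subjects;
    @end

    @interface AODataSource () {
        NSMutableArray *_subjects;   // backing store
    }
    @end

    @implementation AODataSource

    - (instancetype)init {
        if ((self = [super init])) {
            _subjects = [NSMutableArray array];
        }
        return self;
    }

    - (NSArray *)subjects {
        return [_subjects copy];
    }

    // KVC indexed collection accessors: mutations made through these (or through the
    // proxy returned by -mutableArrayValueForKey:) generate KVO change notifications.
    - (NSUInteger)countOfSubjects {
        return _subjects.count;
    }

    - (id)objectInSubjectsAtIndex:(NSUInteger)index {
        return _subjects[index];
    }

    - (void)insertObject:(id)object inSubjectsAtIndex:(NSUInteger)index {
        [_subjects insertObject:object atIndex:index];
    }

    - (void)removeObjectFromSubjectsAtIndex:(NSUInteger)index {
        [_subjects removeObjectAtIndex:index];
    }

    @end

    // Usage: observe the key path, then mutate via the KVC proxy.
    // [dataSource addObserver:self forKeyPath:@"subjects"
    //                 options:NSKeyValueObservingOptionNew context:NULL];
    // [[dataSource mutableArrayValueForKey:@"subjects"] addObject:newSubject];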

Snapshotting OpenGL Views

The 3D animal viewer uses OpenGL to overcome the performance issues inherent in displaying a large number of image frames in rapid succession. For the zoom feature, the current frame needed to be snapshotted and enlarged to 2x. By default UIKit’s snapshotting only captures standard UIView content, not OpenGL-rendered content. Using a combination of Core Graphics and UIKit’s - (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates method, I overcame this obstacle.
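
A sketch of the snapshot step, assuming the GL-backed frame view is drawn into an image context and the resulting image handed to the 2x zoom view (the function and view names are illustrative):

    #import <UIKit/UIKit.h>

    // Snapshot a view, including OpenGL-rendered content, into a UIImage.
    // drawViewHierarchyInRect:afterScreenUpdates: captures the on-screen frame,
    // which a plain layer renderInContext: cannot do for GL-backed views.
    static UIImage *AOSnapshotOfView(UIView *view) {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0); // 0.0 = screen scale
        [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return snapshot;
    }

    // Usage (illustrative): hand the snapshot to the enlarged 2x view.
    // zoomedImageView.image = AOSnapshotOfView(glFrameView);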

Panning and Zooming

Even mainstream apps like MindNode have not solved this issue very well. The challenge is differentiating between the user panning the zoomed-in background view and panning the subject view. I solved this by creating a ‘pan lock’ mode: when the user is in 2x mode and editing, the background layer disables scrolling by default. The UI shows a lock-enabled icon which the user can tap to unlock panning when needed.
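
A minimal sketch of the ‘pan lock’ idea, assuming the background is a UIScrollView whose scrolling is disabled while editing at 2x and a toggle button that reflects the lock state (class and method names are illustrative):

    #import <UIKit/UIKit.h>

    // Illustrative controller fragment: while the user edits at 2x zoom, the background
    // scroll view stops panning so gestures reach the selected lesion instead.
    @interface AOHealthViewController : UIViewController
    @property (nonatomic, strong) UIScrollView *backgroundScrollView;
    @property (nonatomic, strong) UIButton *lockButton;
    @property (nonatomic, assign, getter=isPanLocked) BOOL panLocked;
    @end

    @implementation AOHealthViewController

    - (void)setPanLocked:(BOOL)panLocked {
        _panLocked = panLocked;
        self.backgroundScrollView.scrollEnabled = !panLocked;  // locked = background stays put
        self.lockButton.selected = panLocked;                  // selected state shows the lock icon
    }

    // Entering 2x edit mode locks panning by default; the lock button toggles it.
    - (void)didEnterEditModeAtDoubleZoom {
        self.panLocked = YES;
    }

    - (void)lockButtonTapped:(UIButton *)sender {
        self.panLocked = !self.isPanLocked;
    }

    @end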

Dynamic Height Cells

The health info is displayed in a tableview where each cell can be a different height, depending on the number of lesions saved per subject. I solved this by subclassing UITableViewCell and dynamically calculating the height in the - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath delegate method.
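
A sketch of the height calculation, assuming each lesion contributes a fixed row height on top of a base height for the subject header; the constants and the data-source shape are illustrative:

    #import <UIKit/UIKit.h>

    static const CGFloat AOBaseCellHeight     = 64.0;  // subject name plus summary header
    static const CGFloat AOHeightPerLesionRow = 28.0;  // one line per recorded lesion

    // Illustrative delegate: each cell grows with the number of lesions for that subject.
    @interface AOHealthTableDelegate : NSObject <UITableViewDelegate>
    @property (nonatomic, strong) NSArray<NSArray *> *lesionsPerSubject;  // one lesion array per row
    @end

    @implementation AOHealthTableDelegate

    - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
        NSUInteger lesionCount = [self.lesionsPerSubject[indexPath.row] count];
        return AOBaseCellHeight + lesionCount * AOHeightPerLesionRow;
    }

    @end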

PaintCode

Icons in the Health module were created using PaintCode, so they are resolution-independent.
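
PaintCode emits ordinary UIBezierPath/Core Graphics drawing code, which is why the icons scale cleanly at any resolution. A hedged sketch of the kind of code it generates; the shape and colours here are invented for illustration and are not the app's actual icons:

    #import <UIKit/UIKit.h>

    // A resolution-independent icon drawn with vector paths, in the style of
    // PaintCode-generated drawing code.
    @interface AOLesionIconView : UIView
    @end

    @implementation AOLesionIconView

    - (void)drawRect:(CGRect)rect {
        CGRect inset = CGRectInset(self.bounds, 4.0, 4.0);

        // Outline circle; vector strokes stay sharp at 1x, 2x or any zoom level.
        UIBezierPath *circle = [UIBezierPath bezierPathWithOvalInRect:inset];
        circle.lineWidth = 2.0;
        [[UIColor redColor] setStroke];
        [circle stroke];

        // A simple cross marker in the centre.
        UIBezierPath *cross = [UIBezierPath bezierPath];
        [cross moveToPoint:CGPointMake(CGRectGetMidX(inset), CGRectGetMinY(inset) + 8.0)];
        [cross addLineToPoint:CGPointMake(CGRectGetMidX(inset), CGRectGetMaxY(inset) - 8.0)];
        [cross moveToPoint:CGPointMake(CGRectGetMinX(inset) + 8.0, CGRectGetMidY(inset))];
        [cross addLineToPoint:CGPointMake(CGRectGetMaxX(inset) - 8.0, CGRectGetMidY(inset))];
        cross.lineWidth = 2.0;
        [cross stroke];
    }

    @end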