That Queen song pops into your head—“Under Pressure”—as you begin to sweat the details on the next phase of your mobile web application: adding photo capture to your web app.
Just to start a total body sweat, you think back on your original design requirements and what you told the boss you could do:
- Real-time access to IBM i data and local storage when no mobile connectivity is available
- Ability to take pictures, store them on the device, and upload them
- Ability to retrieve and store geolocation information
- Ability to record audio for taking “audio notes”
Gads, how are you going to pull this off? The one thing that you have going for you is that the company issues Android mobile devices, so you won't have to deal with the many limitations of an Apple device when it comes to web apps storing files on the local mobile device. So one hurdle of mobile cross-device compatibility, file storage, won't be something that you need to worry about.
You also can leverage local storage, as you did in the first phase of the web app. Since you don't have an option for “blob” storage, and you’re not sure you would want to use it if you did, you take the following approach:
- Grab the picture.
- Store the picture as a Base64-encoded string (we love strings and know how to use 'em!).
- Stage the picture (as string) for upload to the IBM i.
Let's tackle them one at a time.
Grab a Picture
Browser capabilities have been rapidly advancing, and access to local device resources has been improving along with them. WebRTC (Web Real-Time Communication) has been around for five years, but it has been evolving quickly, so adoption has been slow. Still, it has been steadily making its way into browser versions to the point where it is now “safe” to use in production (and, please don't tell me you are still using IE6...). The latest version of Chrome stingily allows camera access only from secure (HTTPS) origins, so testing on a non-secure test server is out with Chrome. Firefox is more generous.
There are a couple of changes being implemented currently. The navigator.getUserMedia method and its browser-specific derivatives are being deprecated in favor of the navigator.mediaDevices.getUserMedia method. They function in a similar manner. Both will prompt the user for permission to access the camera, for example. But the navigator.mediaDevices.getUserMedia returns a JavaScript Promise object, which makes avoiding “callback hell” easier.
An example would be:
// Ask for audio and video access; the browser will prompt the user for permission.
var p = navigator.mediaDevices.getUserMedia({ audio: true, video: true });
p.then(function(mediaStream) {
  var video = document.querySelector('video');
  video.src = window.URL.createObjectURL(mediaStream);
  video.onloadedmetadata = function(e) {
    // Do something here (e.g., video.play()).
  };
}).catch(function(err) {
  // The user denied access, or no camera is available.
  console.log('getUserMedia error: ' + err.name);
});
JavaScript Promises are cool, and they’re worth exploring when it comes to asynchronous calls. Basically, the code says: “If the call is successful, then....do something.”
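To see that flow without a camera at all, here is a minimal sketch using a stub in place of the real API (the name fakeGetUserMedia is purely illustrative, not a browser function): it resolves with a fake “stream” when permission is granted and rejects when it isn't, just like the real Promise-based call.

```javascript
// A stub that mimics getUserMedia's Promise behavior.
// (fakeGetUserMedia is illustrative only -- not a real browser API.)
function fakeGetUserMedia(permissionGranted) {
  return new Promise(function(resolve, reject) {
    if (permissionGranted) {
      resolve({ kind: 'video-stream' });   // stand-in for a MediaStream
    } else {
      reject(new Error('PermissionDeniedError'));
    }
  });
}

// "If the call is successful, then... do something."
fakeGetUserMedia(true)
  .then(function(stream) {
    console.log('Got stream of kind: ' + stream.kind);
  })
  .catch(function(err) {
    console.log('No camera for you: ' + err.message);
  });
```

The then/catch chain reads top to bottom, which is exactly what makes Promises easier to follow than nested callbacks.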
The method above and the older callback-style method below both return a stream from the camera.
var media = navigator.getUserMedia(constraints, function(stream) {
  // The URL object is prefixed in WebKit browsers
  var url = window.URL || window.webkitURL;
  // Create the object URL and set it as the source of the video element
  video.src = url ? url.createObjectURL(stream) : stream;
  // Start the video
  video.play();
  videoPlaying = true;
}, function(err) {
  // The error callback is required in the older API
  console.log('getUserMedia error: ' + err.name);
});
The best way to know (and show) this data is to display it in a canvas object in your HTML. The code is very simple, and there are examples scattered all over the Internet. I borrowed heavily from here and also made use of code found here. Both of those examples show how drop-dead simple it is to view the stream from a web cam and capture a snapshot.
But we have one more step we need to add to integrate it into our solution: We need to store the photo!
Store It as a String
On the face of it, I thought I might have some complexities to deal with when it came to storing the photo. If I stored the image file on the mobile device, I would also need to keep track of the location and name of the file so that it could be stored, retrieved, viewed, and transferred to the IBM i. That would mean a couple of I/Os, and there would be some data I would need to keep in sync: the file location and name always have to match where the file was stored and what it was named. Get those two things out of whack and you've basically “lost” your photo.

What simplified the process was that the canvas object has a method: toDataURL(). It converts the canvas drawing into a Base64-encoded PNG data URL (pass 'image/jpeg' as the first parameter to save the image as a JPEG instead). The Base64 encoding is a string representation of the binary image; the string runs about a third larger than the raw binary, but it's still manageable for a moderately sized image. And we already have a great format for our text data (JSON). So the simple approach is to store the Base64 string under a key (the photo name, perhaps) and stuff it into the local storage object in our browser.
But how do we retrieve and display the photo? Reverse the process! Retrieve the Base64 string from local storage, and then set the “src” attribute of the image tag where you want the photo to display. Done!
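A sketch of that round trip might look like this. The helper names (savePhoto, loadPhoto) and the key prefix are my own inventions, and the storage, canvas, and img stand-ins at the bottom exist only so the sketch runs outside a browser; in the real app you'd pass window.localStorage, a real canvas, and a real img element.

```javascript
// Store a photo: convert the canvas to a Base64 data URL and save it
// under a key (any object with getItem/setItem works, e.g. localStorage).
function savePhoto(storage, photoName, canvas) {
  // toDataURL('image/jpeg') returns e.g. "data:image/jpeg;base64,/9j/4AAQ..."
  storage.setItem('photo:' + photoName, canvas.toDataURL('image/jpeg'));
}

// Retrieve a photo: reverse the process -- pull the string back out and
// point an image element's src at it. Returns false if nothing was stored.
function loadPhoto(storage, photoName, imgElement) {
  var dataUrl = storage.getItem('photo:' + photoName);
  if (dataUrl) {
    imgElement.src = dataUrl;  // the browser decodes the Base64 for us
    return true;
  }
  return false;
}

// Simple stand-ins so the sketch runs outside a browser:
var storage = {
  data: {},
  setItem: function(k, v) { this.data[k] = v; },
  getItem: function(k) { return this.data.hasOwnProperty(k) ? this.data[k] : null; }
};
var canvas = { toDataURL: function(type) { return 'data:image/jpeg;base64,AAAA'; } };
var img = {};

savePhoto(storage, 'widget-42', canvas);
loadPhoto(storage, 'widget-42', img);  // img.src now holds the data URL
```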
We already sorted out the synchronization of data between the IBM i and the mobile device in our first task (think JSON and AJAX), so we just add one more object to our synchronization routine. Nothing could be simpler than storing strings in a DB2 file on IBM i.
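For the upload side, the staged payload could look something like the sketch below. The field names (photoName, takenAt, imageData) are illustrative only, not an actual IBM i interface; the point is that stripping the data URL prefix leaves a plain Base64 string that drops straight into JSON.

```javascript
// Build a JSON payload for one photo, ready to POST via AJAX to the
// server-side program that writes it to a DB2 file.
// Field names here are illustrative, not a real interface.
function buildPhotoPayload(photoName, dataUrl) {
  // Strip the "data:image/jpeg;base64," prefix; the server only needs
  // the raw Base64 string after the comma.
  var base64 = dataUrl.substring(dataUrl.indexOf(',') + 1);
  return JSON.stringify({
    photoName: photoName,
    takenAt: new Date().toISOString(),
    imageData: base64
  });
}

var payload = buildPhotoPayload('widget-42', 'data:image/jpeg;base64,AAAA');
console.log(payload);
```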
Breathing a sigh of relief, you crank up Queen's “Hot Space” album and kick back. No pressure here!