Create a Simple Augmented Reality Video Player Using the Wikitude JavaScript API
Who is this post for? We explain how to create an Augmented Reality video playback experience using the Wikitude JavaScript API. We primarily target Adobe AIR developers who use the AR ANE in their projects, but anyone who would like to learn more about the Wikitude JavaScript API should find this tutorial useful. Cheers!
So you’d like to learn how to use the Wikitude JavaScript API? You’re in the right place! In this tutorial we’ll get you familiar with Wikitude’s JS API by creating a simple Augmented Reality Video Player. We assume that you have already integrated the Wikitude SDK into your app (via the Augmented Reality Adobe AIR Native Extension) and are now ready to create an Augmented Reality experience by setting up your custom HTML, JavaScript, CSS, and Wikitude .wtc tracker file.
So if you’re not yet ready to use the AR ANE, please click here to read its wiki before continuing with this tutorial.
Getting Started
I know! Wikitude’s documentation can be a little confusing, especially if you don’t have any experience with AR. But don’t worry, I’ve been there too! In this tutorial we’re going to set up our JavaScript Augmented Reality experience from scratch and explain everything step by step. Cheers!
So to get started, do the following:
Click here to visit our AR Github repository, download it, and copy the files inside the “AIR/assets/11_Video_5_VideoPlayback” directory. This is the “Video Player” AR experience sample project that we explain in this tutorial.
Paste the files into the “Wikitude” directory on your device’s SD card (the directory from which the Wikitude SDK reads your HTML, JS, CSS, and other files).
Now run your app (your app must, of course, have already prepared the Augmented Reality experience using the AR ANE and opened the AR camera) and scan one of the targets below. You should see the video player Augmented Reality experience.
Simply click the play button to play the video, click the video itself to pause/resume it, or move your device’s camera around to see that the video pauses automatically when the target leaves the camera viewport and resumes when it comes back.
Understanding the Folder Structure
Now let’s see how our sample actually works!
Here is how we structured and organized our files, though of course you could do it any other way you like:
The “assets” directory, which contains the .wtc tracker file, images, and so on.
The “css” directory, which contains our HTML page’s stylesheet.
The “js” directory, which contains our JavaScript code.
The “index.html” file, which is the main file that the Wikitude SDK reads.
Setup the HTML File
The most important file is “index.html”, as it’s the first file Wikitude reads to load your Augmented Reality experience. Other than that, it’s just like any other HTML file.
Here is a starter template:
<!DOCTYPE HTML>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <meta content="width=device-width,initial-scale=1,maximum-scale=5,user-scalable=yes" name="viewport">
    <title></title>

    <!-- You need to include the following JavaScript libraries.
         architect.js is provided by the Wikitude SDK at run time, and
         ade.js stubs the AR namespace so the page can also be opened
         in a desktop browser while developing. -->
    <script src="https://www.wikitude.com/libs/architect.js"></script>
    <script type="text/javascript" src="../ade.js"></script>

    <!-- Our custom stylesheets, if we need any! -->
    <link rel="stylesheet" href="css/default.css">
</head>
<body>
    <!-- Our custom JavaScript code which uses the Wikitude JavaScript API -->
    <script src="js/videoplayer.js"></script>
</body>
</html>
As you can see, we have already included all of the necessary Wikitude JavaScript libraries in the head. That’s it! Our HTML is now ready to work with the Wikitude JavaScript API. Cheers!
See how simple that is? Now we can optionally add some HTML elements to our view to build a better user experience, such as a close button to close the AR camera, a loading message, and an informative message that immediately lets the user know which targets to scan to view the augmented reality video player.
So we add the following just after opening our body tag:
<div class="close">
    <a id="close" href="#"><img src="assets/btn_close.png" alt="Close"></a>
</div>

<div id="loading" class="info">Loading ...</div>

<div id="targets" class="info hidden">
    <p>To begin, scan one of the following targets:</p>
    <div>
        <img src="assets/marker_h_01.jpg" alt="Marker 01">
        <img src="assets/marker_h_02.jpg" alt="Marker 02">
    </div>
</div>
As you can see, all of the elements have CSS classes so they can be styled in our “css/default.css” file, where we set their style and position in the view.
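For reference, here is a minimal sketch of what the relevant rules in “css/default.css” might look like. The class names come from the markup above, but the styling itself is just an assumption; yours can be anything you like:

.hidden {
    display: none; /* toggled from JavaScript to show/hide elements */
}

.info {
    position: absolute; /* keep messages centered over the camera view */
    left: 0;
    right: 0;
    top: 10%;
    text-align: center;
    color: #fff;
}

.close {
    position: absolute; /* pin the close button to the top-right corner */
    top: 10px;
    right: 10px;
}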
Our HTML is ready for our sample project at this point, but before we move on, let me also mention that we can enable Wikitude’s logging console while developing our AR experience. Sometimes we need a console to debug our JavaScript code, right? That’s why the Wikitude JavaScript API comes equipped with a logging console out of the box. To enable it, you just need to initialize it on page load.
Heads up! Remember to disable the logging console for production.
We simply put the following code before closing our body tag:
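<script>
    // Enable Wikitude's built-in logging console (AR.logger) while developing.
    AR.logger.activateDebugMode();
</script>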
Now we can send messages to the Wikitude console at run time, while the Augmented Reality experience is running, by calling the Wikitude logger class methods, e.g. AR.logger.debug('msg');.
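For instance, assuming the standard AR.logger methods (debug, info, warning, and error), we could log at different levels from anywhere in our AR experience:

// These messages show up in Wikitude's on-screen logging console.
AR.logger.debug('Tracker is loading ...');
AR.logger.info('Tracker loaded successfully.');
AR.logger.warning('Video has not finished loading yet.');
AR.logger.error('Could not load the tracker file!');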
Let’s Write Our Custom JavaScript Code
Wikitude offers many JavaScript classes which together help you build a variety of incredible AR experiences with ease. It also provides lots of samples that can speed up your app development. In this tutorial we explain just one of those samples in simple terms, to get you started even faster without having to dig through the native Android or iOS documentation just to find out how to run your JavaScript code successfully!
Click here to view Wikitude’s JavaScript API documentation reference.
Click here to visit Wikitude’s Github samples.
Alright, if you take a look at the “js/videoplayer.js” file that you downloaded at the beginning of this tutorial, you can easily figure everything out by reading the code and our comments there. But here we’ll walk through the most important parts of the code as well.
First of all, we create a global JavaScript variable and name it World. It has various properties and methods, which we’ll explain as we continue.
var World = {
    loaded: false,

    init: function initFn() {
        this.createOverlays();
        this.setupCloseBtn();
    },

    createOverlays: function createOverlaysFn() {
        // Set up the tracker and the drawables (explained below).
    },

    setupCloseBtn: function setupCloseBtnFn() {
        // Wire up the close button's click listener (explained below).
    }
};

World.init();
As soon as we call the init function of the World object, we start setting up our Augmented Reality world by calling the createOverlays function, and we add our close button’s listener, which actually closes the AR camera, by calling setupCloseBtn.
Adding the close button’s listener is easy, but in the createOverlays function we do the following, in summary:
We initialize AR.TargetCollectionResource and AR.ImageTracker in order to start the recognition engine and load all of the target images.
Then we initialize the drawables which are going to be displayed on the target images in the AR experience, namely an AR.ImageDrawable and an AR.VideoDrawable.
Finally, we combine everything by defining the AR.ImageTrackable. We simply provide it with our tracker and the drawables that should be displayed on the recognized target images. And we’re good to go!
So let’s start the recognition engine:
this.targetCollectionResource = new AR.TargetCollectionResource('assets/myflashlabs.wtc', {});

this.tracker = new AR.ImageTracker(this.targetCollectionResource, {
    onTargetsLoaded: this.worldLoaded
});
First you have to prepare your tracker file and load it with AR.TargetCollectionResource. We logged in to the Wikitude Studio Manager and created a new tracker file (assets/myflashlabs.wtc).
An AR.ImageTracker then needs to be created from the already-initialized AR.TargetCollectionResource in order to start the recognition engine. When the tracker has loaded all of its target images, the this.worldLoaded callback will be fired.
Now let’s create the augmentation by setting up our drawables. We initialize two drawables in our AR experience:
The “playBtn”, an AR.ImageDrawable. It is a clickable button that appears on the scanned target image, so you can click it to start playing the video.
The “video”, an AR.VideoDrawable. It is the video that is going to be played.
The Play Button Drawable
We create an AR.ImageResource and pass it to AR.ImageDrawable, a visual component that can be connected to an AR.ImageTrackable or AR.GeoObject. The AR.ImageDrawable is initialized with the image, its size, and optional parameters. When this AR.ImageDrawable, which is actually our play button, is clicked, it plays the video.
var playImg = new AR.ImageResource('assets/circle-black_play.png');

var playBtn = new AR.ImageDrawable(playImg, .3, {
    enabled: false, // The button is disabled initially.
    zOrder: 2,
    translate: { y: 0 },
    onClick: function() {
        video.play(1); // Play the video once.
    }
});
The Video Drawable
Let’s initialize the AR.VideoDrawable as well. In order to have an overlay video (an augmented reality video), you should use H.264-encoded video with the baseline profile in an mp4 container. Click here to read more.
Heads up! As with all resources, the video can be loaded locally from the SD card (as in this example) or remotely from any server.
As you can see in the code, as soon as the video is loaded, it enables the play button (which was disabled, or let’s say hidden, initially). Then, when the user clicks the button, our video’s onPlaybackStarted event fires, inside of which we disable the button again. We do the opposite when the video’s onFinishedPlaying event fires.
We also add a click event to the video itself, to pause and resume playback whenever the user clicks on the video.
var video = new AR.VideoDrawable('assets/myflashlabs.mp4', .4, {
    zOrder: 1,
    translate: { y: playBtn.translate.y },
    isPlaying: false,
    isFinished: false,
    onLoaded: function() {
        playBtn.enabled = true; // Enable the button when the video is loaded.
    },
    onPlaybackStarted: function() {
        playBtn.enabled = false; // Disable the button again when the video starts playing.
        video.isFinished = false;
        video.isPlaying = true;
    },
    onFinishedPlaying: function() {
        playBtn.enabled = true; // Enable the button again when the video finishes.
        video.isFinished = true;
        video.isPlaying = false;
    },
    onClick: function() {
        if (video.isFinished) return;
        if (video.isPlaying) {
            video.pause();
            video.isPlaying = false;
        } else {
            video.resume();
            video.isPlaying = true;
        }
    }
});
Combine Everything Together
Finally, let’s combine everything by defining the AR.ImageTrackable. We simply provide it with our tracker and the drawables that should be displayed on the recognized target images. In our example we use ‘*’ as the target name because we want the same video to be displayed when the user scans any of our targets. If we wanted to do different things depending on the scanned target, we would initialize AR.ImageTrackable once per target, each time providing the corresponding target name.
Heads up! We used ‘*’ as the target name, which means the AR.ImageTrackable will respond to any target defined in the target collection. You can use wildcards to specify more complex name matching, e.g. ‘target_?’ to reference ‘target_1’ through ‘target_9’, or ‘target*’ for any target name that starts with ‘target’.
As you can see, we resume video playback when the onEnterFieldOfVision event fires (if the video was already playing), because that’s when the target image comes into the AR camera viewport. We also pause the video when the onExitFieldOfVision event fires, because that’s when the target image leaves the AR camera viewport, and we don’t want the video to keep playing while the user isn’t watching it!
var pageOne = new AR.ImageTrackable(this.tracker, '*', {
    drawables: {
        cam: [ video, playBtn ]
    },
    onEnterFieldOfVision: function() {
        if (video.isPlaying) {
            video.resume();
        }
    },
    onExitFieldOfVision: function() {
        if (video.isPlaying) {
            video.pause();
        }
    }
});
Final Touches
Alright, we’re done with the createOverlays function. Let’s also take care of the setupCloseBtn and worldLoaded functions.
setupCloseBtn: function setupCloseBtnFn() {
    // Add the close button's listener. Clicking it sends a JSON message
    // to the native (AIR) side, which then closes the AR camera.
    document.getElementById('close').addEventListener('click', function(e) {
        AR.platform.sendJSONObject({ action: 'close_ar' });
    });
},

worldLoaded: function worldLoadedFn() {
    // Now that the tracker is loaded, remove the `loading` element from the HTML,
    // and show the user the targets to scan instead.
    document.getElementById('loading').classList.add('hidden');
    document.getElementById('targets').classList.remove('hidden');

    // Let's also hide the 'targets' element after 5 seconds to clear the view.
    setTimeout(function() {
        document.getElementById('targets').classList.add('hidden');
    }, 5000);
}
In the end, our code should look like this:
var World = {
    loaded: false,

    init: function initFn() {
        this.createOverlays();
        this.setupCloseBtn();
    },

    createOverlays: function createOverlaysFn() {
        this.targetCollectionResource = new AR.TargetCollectionResource('assets/myflashlabs.wtc', {});

        this.tracker = new AR.ImageTracker(this.targetCollectionResource, {
            onTargetsLoaded: this.worldLoaded
        });

        var playImg = new AR.ImageResource('assets/circle-black_play.png');

        var playBtn = new AR.ImageDrawable(playImg, .3, {
            enabled: false, // The button is disabled initially.
            zOrder: 2,
            translate: { y: 0 },
            onClick: function() {
                video.play(1); // Play the video once.
            }
        });

        var video = new AR.VideoDrawable('assets/myflashlabs.mp4', .4, {
            zOrder: 1,
            translate: { y: playBtn.translate.y },
            isPlaying: false,
            isFinished: false,
            onLoaded: function() {
                playBtn.enabled = true; // Enable the button when the video is loaded.
            },
            onPlaybackStarted: function() {
                playBtn.enabled = false; // Disable the button again when the video starts playing.
                video.isFinished = false;
                video.isPlaying = true;
            },
            onFinishedPlaying: function() {
                playBtn.enabled = true; // Enable the button again when the video finishes.
                video.isFinished = true;
                video.isPlaying = false;
            },
            onClick: function() {
                if (video.isFinished) return;
                if (video.isPlaying) {
                    video.pause();
                    video.isPlaying = false;
                } else {
                    video.resume();
                    video.isPlaying = true;
                }
            }
        });

        var pageOne = new AR.ImageTrackable(this.tracker, '*', {
            drawables: {
                cam: [ video, playBtn ]
            },
            onEnterFieldOfVision: function() {
                if (video.isPlaying) {
                    video.resume();
                }
            },
            onExitFieldOfVision: function() {
                if (video.isPlaying) {
                    video.pause();
                }
            }
        });
    },

    setupCloseBtn: function setupCloseBtnFn() {
        // Add the close button's listener. Clicking it sends a JSON message
        // to the native (AIR) side, which then closes the AR camera.
        document.getElementById('close').addEventListener('click', function(e) {
            AR.platform.sendJSONObject({ action: 'close_ar' });
        });
    },

    worldLoaded: function worldLoadedFn() {
        // Now that the tracker is loaded, remove the `loading` element from the HTML,
        // and show the user the targets to scan instead.
        document.getElementById('loading').classList.add('hidden');
        document.getElementById('targets').classList.remove('hidden');

        // Let's also hide the 'targets' element after 5 seconds to clear the view.
        setTimeout(function() {
            document.getElementById('targets').classList.add('hidden');
        }, 5000);
    }
};

World.init();
Conclusion
So that’s all there is to it! In this tutorial you learned how to create a simple Augmented Reality Video Player using the Wikitude JavaScript API. Now you know how to create your own tracker file, where to find the Wikitude JavaScript API reference, how to write the basic code, and how to make a real, working example of an augmented reality experience.
We hope this tutorial served you well and helped you speed up your workflow 🙂
With ♥
Myflashlabs Team