Using AJAX for Page Transitions

This article is still in progress. It is under the Tutorial section, but is posted as much for discussion as anything else. I am posting this tutorial to synthesize the several sources I needed to put this useful tool together. Please take this information with a grain of salt, as this is also my first tutorial. That being said, I hope this is as helpful and complete as possible.

I use PHP, JavaScript, and HTML5 on Apache with .htaccess mod_rewrite rules. If you don’t use those, this may not apply to you.

Reasons not to do this:

AJAX? Page Transitions? I know, I know… bad idea. Why? Here are the common and formidable reasons:

  • Page transitions test your user’s patience, adding to load time
  • They can error out, leaving your user on a blank page
  • To have page transitions, you either have to load your whole site onto every page, or
  • You have to use AJAX to call the content as needed, which introduces its own problems, including:
    • AJAX requests do not traditionally register as a hit, so your visitors appear to stay on your home page for a long time
    • You can’t use the back button
    • AJAX requests for more complicated server scripts can get convoluted
    • AJAX was not intended to load the content of a whole page, or any index-required content for that matter

Now that that is addressed, I will say that page transitions are something I have never really stopped thinking about, although I gave up on them for a long time. There is a lot of discouraging material about using AJAX and/or transitions for page content, but I found all of that discouragement to be cautionary rather than absolute. I have said many times, ‘there must be a way to make this work’, especially in light of the steady rise in AJAX use by serious content providers like Twitter, Facebook, and Google, just to name a few. After grinding my teeth and doing more research, I found that there is support for the AJAX page format, and I believe I have an effective way to make it all come together.

Considerations

First off, I want to point out that the warnings above are prevalent for good reason. There are a lot of issues to address there, and I would apply these techniques carefully and selectively. That said, a single page with no reloads offers the user a uniquely seamless experience. Remember Flash? Those types of sites were immersive, engaging, and unprecedented, but limited to ‘vanity’ sites with little dynamic capability. Now, with HTML5 and jQuery replacing Flash, an immersive site can still be highly functional. Here is what we will accomplish:

  1. Complete a simple AJAX request
  2. Use a consolidated PHP page include to generate the needed content for each page
  3. Use our PHP include as the source for both script and non-script data
  4. Use #! hash fragments to simulate unique URLs
  5. Delay different stages of the process for more timing control
  6. Save each page state to the browser, and make our script respond to those states
  7. Send hits to Google Analytics with each request
  8. Replicate back and forward button functionality
  9. Re-bind page-relevant event handlers to newly created DOM elements

Simple AJAX request

First let’s create a simple AJAX request.

Create a variable to store our request object:

{code type=javascript} var xmlhttp; {/code}

There are old and new methods for creating a request object. Use both for backward compatibility.

{code type=javascript}

// code for IE7+, Firefox, Chrome, Opera, Safari

if (window.XMLHttpRequest) xmlhttp = new XMLHttpRequest();

// code for IE6, IE5

else xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");

{/code}

Now that we can store the response from the request, we need to make a function to run once the request is complete. Notice that here and throughout this tutorial I use the jQuery library to refer to elements and create effects. jQuery is a JavaScript library and must be included in your page in order to work.

{code type=javascript}

xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        $('#content-scroll').html(xmlhttp.responseText);
    }
}

{/code}


At this point, we haven’t actually requested any data. Let’s create a function to make the request.

{code type=javascript}

function makeRequest() {

xmlhttp.open("GET", "test.html", true); //method, file, and true for asynchronous

xmlhttp.send(); //send the request; the send string is used for POST requests only

}

{/code}

Then attach this function to an onclick or other event (more on this later). javascript:void(0) is required in the href for now, as anything else will prompt a page change. Return false will be important later on.

{code type=HTML}

<a href="javascript:void(0)" onClick="makeRequest(); return false;">Make Request</a>

{/code}

POST vs GET
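
This tutorial uses GET throughout, since we are only retrieving content and the page name travels comfortably in the URL. For completeness, here is what the same request would look like with POST; this is just a sketch, since nothing later in the tutorial depends on it, and the page value here is just a placeholder for whatever you want to send. With POST, the data goes in the request body via send(), and you must set the content-type header yourself:

{code type=javascript}

xmlhttp.open("POST", "AJAXpp.php", true); //POST instead of GET; no query string on the URL

xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); //tell the server how the body is encoded

xmlhttp.send("page=" + page); //with POST, the data string goes in send()

{/code}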

Using server-side script

Great, we have completed our first request. There is a problem, though, if we need to include server-side script. All of the PHP executed to load the first page (or the template) is no longer available, which means all variables, arrays, functions, etc. are gone. Thinking about reloading all that code again can cause a quick headache, but think of it this way: usually you have script for the template, and script that applies to your pages. If you can separate those two, and load your page-relevant code only when you request a page, you are halfway there. This avoids redundant or unnecessary code, processing and requesting only what you need for the page you requested.

Let’s create a PHP preprocessor file to do this job for us (e.g. AJAXpp.php). For now we only need a simple page include based on our page query string (next step):

{code type=PHP}

$page = $_GET['page']; //the page variable = the value of our 'page' query string

include("user/pages/$page.php"); //this is not secure yet

{/code}

Now make some page .php files to be included by AJAXpp.php. Simple files like home.php containing just the text “Home” are good for now. (About that ‘not secure yet’ comment: before going live, whitelist or sanitize $page, for example by stripping anything that isn’t a letter, number, or hyphen, so visitors can’t include arbitrary files.)

Now that we have a file to process the page data, we change our AJAX request:

{code type=javascript}

function makeRequest(page) {

xmlhttp.open("GET", "AJAXpp.php?page=" + page, true); //method, file, and true for asynchronous. The .php extension is appended inside the include, so it is not needed in the query string

xmlhttp.send(); //send the request; the send string is used for POST requests only

}

}

{/code}

You could pull the page variable from a lot of places to pass as a query string. Here I explicitly define it as a function parameter, which means you have to include the page name in every use of the makeRequest() function. Later I will explain more efficient ways to accomplish this.

At this point your script should be making requests to AJAXpp.php?page=some-page and getting back the contents of some-page.php through AJAXpp.php. Remember that PHP is executed server-side, so placement (head vs. body) does not matter the way it does for JavaScript. This gives you the freedom to include PHP code for one page, for each page, or for your template separately.

Making your script and noscript server-side requests come from the same source

Now that your AJAX page requests are being processed through your AJAXpp file, we need to make non-AJAX requests process the same way.

Unlike URL requests, PHP includes do not support query strings. Unlike our AJAX request, however, the first page to load will be executed along with our initial template code, which means all our original variables are still available. In your template (probably index.php), set a page variable from GET like we did in AJAXpp.php:

{code type=PHP}

$page = $_GET['page']; //the page variable = the value of our 'page' query string

{/code}

Now that $page has been defined for the template, it can be used in AJAXpp to make a non-AJAX call. We just have to change the page definition in AJAXpp:

{code type=PHP}

if (!$page) $page = $_GET['page']; //if the page variable is not already set, give it a value

include("user/pages/$page.php"); //this is not secure yet

{/code}

This change means that AJAXpp will only assign $page a value from GET if it does not already have one, i.e. if this is an AJAX request. If $page (the variable, not the query string) is still defined from our initial load, we just want to include that page through AJAXpp.

I hope this is all making sense so far. At this point, go ahead and shut off JavaScript to test your URLs. If you try the address index.php?page=page, you should bring up the content of one of the pages you created. Now if you turn JavaScript back on and fire the makeRequest(‘your-page’) onclick event from before, you should be able to bring up the same page via AJAX. If the script and no-script requests show the same result, we are ready to go to the next step.

Handling your Home page

One loose end: when someone visits your root URL, no page query string is set at all, so give $page a default before the include, e.g. if (!$page) $page = ‘home’;.

Using hash tags for unique URLs

When you click to fire the request and the page appears to change, notice that the address bar does not. This is what we wanted, since the page does not reload, but it also leaves us with no unique URLs to identify our individual pages.


Change your links to include #!

In order to change the URL each time without firing a request, we use the old hash approach, like so: /#page, but we also include ‘!’ as per Google’s specifications. Our link structure, which would normally be /home, /about, /events, etc., is now /#!page. The #! does what we need it to, changing the URL without reloading the page, and has the added benefit of being recognized as AJAX by crawlers, which means we are on our way to indexability.

We still have a problem, though: if a crawler (or any client) requests a /#! URL it finds online, what will it see, given that the server never processes anything after the #?

Google provides an excellent solution to this problem by aliasing ‘#!’ with ‘_escaped_fragment_=’ (strange I know) so that it will in fact be processed by the server! That means that our url structure can now interact with both the server and our AJAX script. Here is what we need to do to make this happen:

URL Rewrites

Normally the server considers anything with an extension to be a file (.htm, .php) and anything without to be a directory (/blog, /images). The first step is to use mod_rewrite to create universal URLs and mask the real file location. This is an excellent practice, since forward compatibility becomes a breeze: these URLs can be redirected anywhere once they are created.


{code type=apache}

RewriteCond %{HTTP_HOST} ^DOMAIN\.com$ [OR]
RewriteCond %{HTTP_HOST} ^www\.DOMAIN\.com$
RewriteRule ^([A-Za-z0-9-]+)$ index.php?page=$1 [QSA,L]

{/code}

.htaccess can get heady really quickly. If you don’t already understand what’s going on here, don’t feel bad; I am still baffled by it. I’ll give some explanation:

  • The RewriteCond lines anchor us after our domain name (here with or without www); the dots are escaped with \
  • The RewriteRule is what’s applied if our conditions are met, and it has 2 parts
  • The first part (optionally) starts with ^ and ends with $. We use them here to limit the scope of our character match.
  • The characters we are matching are within []. Here it will match any combination of A-Z, a-z, 0-9, and ‘-’. The + means 1 or more occurrences of any of these characters. So our URL might be something like ‘/home-decor’.
  • The () around this expression encapsulate the match and save it, kind of like a variable.
  • In the second part, we say where that match will be directed. Here is probably the most common page include URL, index.php?page=PAGE.
  • $1 is used to recall the match we saved in ()
  • QSA means the original query string is appended to the new one; L means last: if this rule executes, do not process any further rules

At this point, shut JavaScript off again and try your new URLs. Now instead of index.php?page=your-page, you can use simpler URLs like /your-page.

Server handling of the #! alias

If you can bring up some pages you’ve created this way, you’re on track. Make sure it works, because once you move to “_escaped_fragment_=”, you will not be able to test your URLs from a normal browser: crawlers will translate #! into this alias, but your browser will not. If you’re sure that your non-#! URLs work, you are ready to change your rewrites again.

{code type=apache}

RewriteCond %{HTTP_HOST} ^DOMAIN\.com$ [OR]
RewriteCond %{HTTP_HOST} ^www\.DOMAIN\.com$
# _escaped_fragment_ arrives as a query string, which the RewriteRule pattern never sees,
# so match it with a RewriteCond on %{QUERY_STRING} and recall the capture with %1
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=([A-Za-z0-9-]+)$
RewriteRule ^$ index.php?page=%1 [L]

{/code}

Also include this in your header:

{code type=HTML}

<meta name="fragment" content="!" />

{/code}

Now that the #! alias _escaped_fragment_= is in place, only crawlers will request your #!your-page URLs from the server, as _escaped_fragment_=your-page. If you turn off JavaScript in your browser at this point, your URLs should not work.

Now that our URL structure is supported in script and bot environments, we can create a proper menu:

{code type=HTML}

<a href="/#!your-page" onClick="makeRequest('your-page'); return false;">Make Request</a>

{/code}

Even better, let’s create a jQuery event handler that automatically retrieves the name of the page. Erase the makeRequest() function and put an event handler (as always) somewhere after the tags it is bound to:

{code type=javascript}

//after menu or end of body

$("a.menu").click(function() {

var thisPage = $(this).attr('href'); //get the href of this link to extract the page name

thisPage = thisPage.replace("/#!", ""); //strip the leading /#! to get the bare page name

xmlhttp.open("GET", "AJAXpp.php?page=" + thisPage, true); //method, file, and true for asynchronous. The .php extension is appended inside the include, so it is not needed in the query string

xmlhttp.send(); //send the request; the send string is used for POST requests only

});

{/code}

and for each menu item:

{code type=HTML}

<a class="menu" href="/#!your-page" onClick="return false;">Make Request</a>

{/code}

Notice I still include onClick=”return false;”. I don’t doubt there is a better way to do this; in fact, returning false from the jQuery click handler itself should accomplish the same thing. Return false stops the link from being followed if the event JavaScript is successful. I’m not sure if this is even necessary since we are using #! URLs.

With your finished menu, you should be able to click on each link and watch your page content change.


Adding effects

As I said before, one should be very careful about page transition effects. The delay can test someone’s patience enough that they just leave your site. If you are going to use transition effects, you should try to use your site from the perspective of both a new visitor and a returning visitor, the latter being less likely to appreciate the aesthetic of the site. You have to consider how, and for what reason, people use your site. If it is information-driven with a lot of returning users, it is a bad candidate for effects. If you still feel your users can benefit from “the experience” of browsing your site, let’s continue by adding effects with jQuery.

Note: I am using jQuery’s animation here; however, I am not using the HTML5 canvas, which may be better suited for many effects. If you plan to do a lot of animating, I suggest you learn HTML5 canvas as soon as possible.

Let’s consider probably the most common content transition: fading. You click a link, the current content fades out and the new content fades in. Basically, as we make and handle our AJAX request, we add effects to be fired at the appropriate times.

First we want the content to fade out when we click a link. Let’s go back to our onclick handler:

{code type=javascript}

//after menu or end of body

$("a.menu").click(function() {

var thisPage = $(this).attr('href'); //get the href of this link to extract the page name

thisPage = thisPage.replace("/#!", ""); //strip the leading /#! to get the bare page name

$('#content-scroll').stop().fadeOut(1000, function() { //fade out the same element we fade back in later

xmlhttp.open("GET", "AJAXpp.php?page=" + thisPage, true); //method, file, and true for asynchronous

xmlhttp.send(); //send the request; the send string is used for POST requests only

});

});

{/code}

You can see here that we put all of our request script inside another function, passed as a callback at the end of fadeOut. Doing so means the request will not be executed until the fade-out is complete, keeping our steps in order. .stop() is called before the fade command, interrupting any current animation to avoid an effects backup. The first argument to fadeOut is the duration, setting how long the fade should last in milliseconds. Now when we click a link with the class “menu”, our content should fade out.

The content will fade out, then the AJAX request will be made, so now we go back to our onreadystatechange handler to fade the new content in:

{code type=javascript}

xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        $('#content-scroll').html(xmlhttp.responseText).stop().fadeIn(1000);
    }
}

{/code}

When the request is complete, we use jQuery to replace the HTML contents of the content element, again stopping any current effects, and fade in. Neither the replacement nor the effect will occur without a successful response from the server.

Using delays for more timing control
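
This section is still in progress, but here is the core idea: on a fast connection the response can arrive so quickly that the transition barely registers. One simple approach, sketched here inside the click handler from above (the 500 ms figure is just an example value), is to wrap the request in a setTimeout:

{code type=javascript}

$('#content-scroll').stop().fadeOut(1000, function() {

setTimeout(function() { //hold for an extra 500ms before requesting, purely for pacing

xmlhttp.open("GET", "AJAXpp.php?page=" + thisPage, true);

xmlhttp.send();

}, 500);

});

{/code}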

Re-binding page-relevant event handlers (page effects will not work after the first page otherwise)
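
Also in progress, but the problem is worth stating now: handlers bound with .click() attach only to elements that exist at that moment, so elements inside newly loaded content have no handlers. One fix, sketched here assuming jQuery 1.7+ (older versions would use .delegate() or .live()), is to bind to the container that never gets replaced and let the event delegate to whatever is inside it; a.gallery-thumb is just a made-up example of a page-specific element:

{code type=javascript}

//bind once to the permanent container, not to the elements inside it;

//this also covers elements created by future AJAX loads (jQuery 1.7+)

$('#content-scroll').on('click', 'a.gallery-thumb', function() {

$(this).toggleClass('active'); //example page-specific behavior

return false;

});

{/code}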


Saving the state to the browser using HTML5
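
Still to be written in full, but the relevant tool is the HTML5 History API: history.pushState() stores a state object with each navigation, and the browser fires a popstate event when the user clicks back or forward, which is also how we will replicate back/forward functionality. A minimal sketch, assuming we kept a makeRequest(page) helper around; the shape of the state object is my own:

{code type=javascript}

//inside the click handler, after firing a request:

if (window.history && history.pushState) { //feature-test; older browsers lack the History API

history.pushState({page: thisPage}, "", "/#!" + thisPage); //remember which page this was

}

//when the user hits back or forward, re-request the remembered page

window.onpopstate = function(event) {

if (event.state && event.state.page) makeRequest(event.state.page);

};

{/code}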

Sending a hit to Google Analytics
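
Unfinished as well, but the idea is simple: since no page load occurs, Analytics never sees these navigations, so we report a virtual pageview ourselves on each successful request. A sketch assuming the classic asynchronous tracking snippet (the _gaq queue) is already installed on the page:

{code type=javascript}

//after the new content has loaded successfully:

if (window._gaq) _gaq.push(['_trackPageview', '/#!' + thisPage]); //report the hashbang URL as a pageview

{/code}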

Restore no-script user functionality

One likely approach: keep the original /your-page rewrite rule from earlier alongside the _escaped_fragment_ rule, so human visitors without JavaScript can still reach index.php?page=your-page directly.
