
2013-02-19

Live Web Bicycle Dashboard using ControlMyPi


*** NOTE: ControlMyPi shutting down ***

This post shows how I set up a Live Web Dashboard from a Raspberry Pi, as seen in the video above. In case you haven't worked it out, what's going on here is that the Raspberry Pi is using 3G to send GPS and accelerometer data up to ControlMyPi. Users can then log in to ControlMyPi and watch the live data displayed on the dashboard. In this case I'm using Google Street View and Maps to show the current position and heading, and gauges to show speed and the X, Y and Z accelerations.

(Special thanks to Rasathus for making this video for me!)

Click here to watch the "as live" replay of bicycle telemetry!


The diagram below shows the data flow. There's a lot going on here, so in this first post I'll explain how it all fits together, and in the following post (or maybe posts) I'll go through the code.

3G dongle

There are quite a few guides out there for setting up 3G dongles on the Raspberry Pi. Some use what now seems to be an unsupported script called sakis3g - I tried this first but was uneasy about using something that the author appears to have taken down. In fact, all I had to do was install usb-modeswitch and wvdial. usb-modeswitch automatically detects your dongle and switches it into TTY mode; simply use apt-get to install it.

wvdial is used to make the PPP connection. After using apt-get again to install it, I followed the instructions on Linux Forums to get my connection up and running.
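
For reference, a typical wvdial.conf for a 3G dongle looks something like this - the modem device, baud rate and APN below are placeholders for illustration, so substitute your own:

[Dialer Defaults]
Modem = /dev/ttyUSB1
Baud = 460800
Init1 = ATZ
Init2 = AT+CGDCONT=1,"IP","your.apn.here"
Phone = *99#
Username = dummy
Password = dummy
Stupid Mode = 1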

Start-up sequence: there's probably a better Linuxy way of doing this (maybe someone can comment to let me know) but I needed to control the order in which things start up, to help with TTY discovery and smooth networking. I'm using a TextStar serial LCD through a USB-to-serial converter, and my code expects to find it on /dev/ttyUSB0, so I really don't want the 3G dongle to appear on USB0. Also, I found that if my code starts trying to use the network before the PPP link is up it doesn't work, and you have to stop and start the program - no good if you're out on your bicycle somewhere. So I wrote a small boot-up script, run from /etc/rc.local, which guides the user through this sequence (a sketch of the script follows the list):

  1. Remove 3G dongle and power up Raspberry Pi
  2. Raspberry Pi boots up and runs rc.local
  3. LCD shows "Insert dongle and press 'A'"
  4. User plugs in dongle and waits for Blue LED before pressing the 'A' button on the TextStar
  5. LCD shows "Starting 3G please wait"
  6. wvdial is started
  7. Few seconds delay to wait for the network to come good
  8. Start up Bicycle Telemetry app
This works nicely and means I can get the system up and running anywhere.
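
A minimal sketch of that boot script - the TextStar baud rate, the way button presses are read, and the file paths are all assumptions for illustration:

#!/usr/bin/env python
import serial, subprocess, time

lcd = serial.Serial('/dev/ttyUSB0', 2400)   # TextStar LCD (assumed baud rate)

lcd.write('Insert dongle and press A')
while lcd.read(1).lower() != 'a':           # assume button presses arrive as characters
    pass

lcd.write('Starting 3G please wait')
subprocess.Popen(['wvdial'])                # bring up the PPP link in the background
time.sleep(20)                              # give the network time to come good

subprocess.call(['python', '/home/pi/bicycle_telemetry.py'])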

GPS

I'm using the Adafruit Ultimate GPS Breakout - 66 channel w/10Hz updates - Version 3. I chose this one because the 10Hz update rate fitted well with the code design: as you'll see when I publish the code, the main loop is timed off the updates from the GPS. Although I'm not updating the web dashboard that quickly, I am logging this information to file 10 times a second.
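
To give a flavour of that design, here's a minimal sketch of a loop paced by the GPS - readline blocks until the next NMEA sentence arrives, so filtering for one sentence type gives you one pass per fix. The port settings assume the module has already been configured as described below, and the log file name is made up:

import serial, time

gps = serial.Serial('/dev/ttyAMA0', 38400, timeout=2)
log = open('/home/pi/trace.log', 'a')

while True:
    sentence = gps.readline().strip()   # blocks until the GPS sends a line
    if sentence.startswith('$GPRMC'):   # one RMC sentence per fix, 10 per second
        log.write('%.3f %s\n' % (time.time(), sentence))
        log.flush()
        # ...read the accelerometer and push dashboard updates here...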

The breakout board is simply connected to the TTL serial pins on the Raspberry Pi and then appears on /dev/ttyAMA0. When I wrote the code I didn't realise that gpsd existed, so I have some code which decodes the serial stream directly - it's pretty simple though. Also, I can't see how you would send the settings commands through to the GPS module from gpsd - I'm sending commands to switch the baud rate to 38400 and put it in 10Hz update mode (see PMTK_SET_NMEA_UPDATERATE in this pdf). Without these settings it defaults to 9600 baud and 1Hz updates.
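
The set-up itself only takes a couple of writes. A sketch, assuming pyserial, with the PMTK sentences (including their checksums) as given in the MTK command reference:

import serial

gps = serial.Serial('/dev/ttyAMA0', 9600)   # module boots at 9600 baud, 1Hz
gps.write('$PMTK251,38400*27\r\n')          # PMTK_SET_NMEA_BAUDRATE: 38400
gps.close()

gps = serial.Serial('/dev/ttyAMA0', 38400)  # reopen at the new rate
gps.write('$PMTK220,100*2F\r\n')            # PMTK_SET_NMEA_UPDATERATE: 100ms = 10Hz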




3 Axis Accelerometer

Another Adafruit board is used here - the ADXL335, which senses up to ±3g in the X, Y and Z directions. I've then used an MCP3008 to convert the analog voltage outputs into three digital readings available over SPI. This uses the same technique (and code) as my article: Raspberry Pi hardware SPI analog inputs using the MCP3008.
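
As a reminder of the shape of that code, here's a minimal sketch. The channel numbers, supply voltage and sensitivity are assumptions for illustration (see the linked article for the real details):

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                              # SPI bus 0, chip-select 0

def read_adc(channel):
    # MCP3008 transaction: start bit, single-ended mode + channel,
    # then clock out the 10-bit result (see the MCP3008 datasheet)
    r = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((r[1] & 3) << 8) + r[2]

def to_g(raw):
    # Convert the 0-1023 reading to g, assuming a 3.3V supply with a
    # 1.65V zero-g bias and roughly 330mV per g
    volts = raw * 3.3 / 1023.0
    return (volts - 1.65) / 0.33

x, y, z = [to_g(read_adc(ch)) for ch in (0, 1, 2)]   # assumed X,Y,Z wiring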










ControlMyPi

I have created the cloud app ControlMyPi to make projects like this not only possible, but easy. A client library runs on the Raspberry Pi, and all communication between it and ControlMyPi is over the XMPP (also known as Jabber) protocol - the same instant messaging protocol used by Google Talk. By using XMPP you're not hampered by firewalls and such, and you get near real-time messaging.

ControlMyPi also makes it simple to serve your live data to multiple web clients using "push". Updates from your Pi are routed through ControlMyPi and "pushed" to the web browser of anyone viewing the dashboard without refreshing pages.

More information about ControlMyPi is in my previous article: Control My Pi - Easy web remote control for your Raspberry Pi projects. Just as a teaser for the next post, here is the panel definition code for the Bicycle Dashboard:

[
[ ['S','locked',''] ],
[ ['O'] ],
[ ['P','streetview',''],['P','map',''] ],
[ ['C'] ],
[ ['O'] ],
[ ['L','Speed'],['G','speed','mph',0,0,50], ['L','Height'],['S','height',''] ],
[ ['C'] ],
[ ['L','Accelerations'] ],
[ ['G','accx','X',0,-3,3], ['G','accy','Y',0,-3,3], ['G','accz','Z',1,-3,3] ],
[ ['L','Trace file'],['B','start_button','Start'],['B','stop_button','Stop'],['S','recording_state','-'] ]
]

Here I'm using Picture widgets (P) for 'streetview' and 'map'. ControlMyPi allows you to push an update to an image by sending a url from your script, so the Google Maps Image APIs work very well. I'll show exactly how this is done in code in the next post.
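
To give a flavour of this before the next post, here's a minimal sketch. The URL parameters come from the Street View Image and Static Maps APIs; push_update is just a stand-in for the ControlMyPi client call that sends [widget id, value] pairs up to the dashboard:

def push_update(updates):
    pass    # stand-in for the ControlMyPi client library call

def map_urls(lat, lng, heading):
    loc = '%.6f,%.6f' % (lat, lng)
    streetview = ('http://maps.googleapis.com/maps/api/streetview'
                  '?size=300x200&location=%s&heading=%d&sensor=false'
                  % (loc, heading))
    static_map = ('http://maps.googleapis.com/maps/api/staticmap'
                  '?center=%s&zoom=16&size=300x200&markers=%s&sensor=false'
                  % (loc, loc))
    return streetview, static_map

sv, mp = map_urls(51.5014, -0.1419, 180)    # example fix: lat, lng, heading
push_update([['streetview', sv], ['map', mp]])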

Since I can't be riding around 24x7, I have set up a script which plays back recorded data from a few journeys. This script runs on a Raspberry Pi and sends the data out to ControlMyPi at the original speed, so it's a pretty good simulation. I've set it up as a public panel so you can access it from the front page - or via this link: Replay of bicycle telemetry.
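
The replay trick itself is simple: sleep for the gap between consecutive timestamps so the records go out at the pace they were logged. A sketch, with the trace format and the handler left as assumptions:

import time

def replay(path, handle):
    prev = None
    with open(path) as f:
        for line in f:
            stamp, record = line.split(' ', 1)   # '<timestamp> <record>' per line
            stamp = float(stamp)
            if prev is not None:
                time.sleep(stamp - prev)         # reproduce the original timing
            prev = stamp
            handle(record)                       # parse and push to ControlMyPi

replay('/home/pi/trace.log', lambda record: None)   # stand-in handler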



The next post will guide you through the code for all of this, coming soon...


2010-10-28

Manipulating images in App Engine's Blobstore

In my previous post I showed how to take images uploaded to the Blobstore and resize them before storing them in the Datastore. This was used on PicSoup reliably for six months or so. I really wanted to find a solution that could use a faster image serving service like Picasa, but ran into terms-of-service issues.

Shortly after writing that blog entry App Engine SDK 1.3.6 was released along with the new fast image serving facilities. There are some good tutorials which explain this in detail so I won't repeat that here.

What I needed for PicSoup was to use the new image serving but still be able to manipulate the images in the Blobstore. This is so I can allow the user to upload images larger than 1MB and then resize them to 800x600 for storage. Typically a JPEG at this size is less than 100KB, whereas the originals tend to be around 3MB. I also wanted to provide a rotation feature so the user can correct the image orientation after uploading.

The key to this problem is how to get an image out of the Blobstore, manipulate it, and put it back. The Blobstore API has no methods for writing directly to it; you can only write by uploading data through an HTTP POST, thus creating a new blob. So the problem breaks down into three steps:
  1. Get the image from the Blobstore and manipulate it
  2. Upload the new image to the Blobstore
  3. Update Datastore references to the new Blob and remove the old Blob

Step 1: Get the image from the Blobstore and manipulate it

BlobKey bk = new BlobKey(ce.getBlobKey());
ImagesService imagesService = ImagesServiceFactory.getImagesService();
Image oldImage = ImagesServiceFactory.makeImageFromBlob(bk);
Transform rotate = ImagesServiceFactory.makeRotate(90);
Image image = imagesService.applyTransform(rotate, oldImage, ImagesService.OutputEncoding.JPEG);
sendToBlobStore(Long.toString(ce.getId()), "save", image.getImageData());
My domain object, a competition entry (ce), has a BlobKey string property. I use the ImageService to make an image from the blob and rotate it to create a new image.

Step 2: Upload the new image to the Blobstore

This is the step that I imagine the App Engine team will get around to adding to the API at some point - it would be nice to have a function to complement makeImageFromBlob(BlobKey) called makeBlobFromImage(Image). In the meantime I have written my own multipart/form-data POST routine:
private static final boolean PRODUCTION_MODE = SystemProperty.environment.value() == SystemProperty.Environment.Value.Production;
    
private static final String URL_PREFIX = PRODUCTION_MODE ? "" : "http://127.0.0.1:8888";

private void sendToBlobStore(String id, String cmd, byte[] imageBytes) throws IOException {
    String urlStr = URL_PREFIX+BlobstoreServiceFactory.getBlobstoreService().createUploadUrl("/blobimage");
    URLFetchService urlFetch = URLFetchServiceFactory.getURLFetchService();
    HTTPRequest req = new HTTPRequest(new URL(urlStr), HTTPMethod.POST, FetchOptions.Builder.withDeadline(10.0));
    
    String boundary = makeBoundary();
    
    req.setHeader(new HTTPHeader("Content-Type","multipart/form-data; boundary=" + boundary));
    
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    
    write(baos, "--"+boundary+"\r\n");
    writeParameter(baos, "id", id);
    write(baos, "--"+boundary+"\r\n");
    writeImage(baos, cmd, imageBytes);
    write(baos, "--"+boundary+"--\r\n");

    req.setPayload(baos.toByteArray());
    try {
        urlFetch.fetch(req);
    } catch (IOException e) {
        // Need a better way of handling Timeout exceptions here - 10 second deadline
        logger.error("Possible timeout?",e);
    }        
}

private static Random random = new Random();    

private static String randomString() {
    return Long.toString(random.nextLong(), 36);
}

private String makeBoundary() {
    return "---------------------------" + randomString() + randomString() + randomString();
}        

private void write(OutputStream os, String s) throws IOException {
    os.write(s.getBytes());
}

private void writeParameter(OutputStream os, String name, String value) throws IOException {
    write(os, "Content-Disposition: form-data; name=\""+name+"\"\r\n\r\n"+value+"\r\n");
}

private void writeImage(OutputStream os, String name, byte[] bs) throws IOException {
    write(os, "Content-Disposition: form-data; name=\""+name+"\"; filename=\"image.jpg\"\r\n");
    write(os, "Content-Type: image/jpeg\r\n\r\n");
    os.write(bs);
    write(os, "\r\n");
}
The sendToBlobStore method takes three arguments:
  1. id - a domain object key id used to update the datastore reference to the new blob
  2. cmd - a command string used to determine how to handle the uploaded data
  3. imageBytes - a byte array of the new image that is to be uploaded
It then creates a multipart/form-data payload to send via the URLFetchService. The deadline has been set to 10 seconds - the current maximum - but as you can see there is still a try..catch block around urlFetch.fetch(req) to catch timeouts. More about this later.

Step 3: Update Datastore references to the new Blob and remove the old Blob

The Blobstore calls back to "/blobimage" (as defined earlier in sendToBlobStore) when it has finished storing the new blob, so a doPost method is required to handle the incoming callback. When we have finished processing the callback we have to send a redirect, and therefore we need a handler ready to respond to that as well. A quirk I've noticed here is that the browser follows the redirect with a GET whereas the URLFetchService follows it with another POST, so the handler has to be available for both.
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.getWriter().write("SUCCESS");
}

public void doPost(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
    // Handle post requests
    String qcmd = req.getParameter("qcmd");        
    if ("success".equals(qcmd)) {
        res.getWriter().write("SUCCESS");
        return;
    }
    
    // Handle upload callbacks
    Map<String, BlobKey> blobs = blobstoreService.getUploadedBlobs(req);
    if (blobs.isEmpty()) {
        throw new ServletException("UploadedBlobs map is empty");
    }
    Entry<String, BlobKey> entry = blobs.entrySet().iterator().next();
    
    String handler = entry.getKey();
    BlobKey blobKey = entry.getValue();
    
    if ("upload".equals(handler)) {
        initialUploadHandler(res, blobKey);
    } else if ("save".equals(handler)) {
        saveHandler(req, blobKey);
        res.sendRedirect(SUCCESS_RESULT);
    } else {
        throw new ServletException("Invalid handler request ["+handler+"]");
    }
}
Here you can see that my handlers for the redirects just send the word SUCCESS. My GWT code reads this and then makes further RPCs to update the front-end. The part to explain here is under the "Handle upload callbacks" comment: I simply take the first entry from the uploaded blobs map and use its key to determine how to process the callback. The key is the "cmd" parameter we passed earlier to the sendToBlobStore method. I have removed a few handlers from this example for brevity, but you can see how I can have different processing for an initial upload from a browser versus an internal upload following a rotate transformation.

The rotate operation we ran in Step 1 passed in the cmd "save" meaning the saveHandler is called:
private void saveHandler(HttpServletRequest req, BlobKey blobKey) {
    Long compEntryId = new Long(req.getParameter("id"));
    logger.info("Incoming image to save: ["+blobKey.getKeyString()+"] id=["+compEntryId+"]");

    CompEntry ce = dao.getCompEntry(new Key<CompEntry>(CompEntry.class, compEntryId));
    if (ce != null) {
        String oldBlobKey = ce.getBlobKey();
        
        ce.setBlobKey(blobKey.getKeyString());
        ce.setServingUrl(getServingUrl(blobKey));
        ce.setResized(true);
        dao.ofy().put(ce);
        
        // Delete the old Blob
        if (oldBlobKey != null) {
            blobstoreService.delete(new BlobKey(oldBlobKey));
        }
    }
}

private String getServingUrl(BlobKey blobKey) {
    String servingUrl = ImagesServiceFactory.getImagesService().getServingUrl(blobKey);
    // Hack for Dev mode
    if (PRODUCTION_MODE) {
        return servingUrl;
    } else {
        return servingUrl.replaceFirst("http://0.0.0.0:8888", "");                    
    }
}
In saveHandler there is a little bit of Objectify code to update the datastore object to reference the new blob. The old blob is then deleted. Note my little hack in getServingUrl to iron out a difference between the Development and Production environments.

Timeouts

I arrived at the design above after a number of experiments. The main problem that shapes the solution this way is the URLFetchService timeout. The maximum deadline at the moment is 10 seconds, which seems like plenty of time, but an IOException for a timeout is regularly thrown. For some reason (any explanation gratefully received), when there is only one instance of the app running in production the deadline is always reached; as soon as there are two or more instances running this stops happening. Unfortunately the exception thrown is just an IOException and not something more specific like URLFetchDeadlineExceededException, which would be much nicer. On the development server this timeout is never reached.

To get around this timeout issue you just have to make sure that any critical code goes into the Blobstore callback handler. For example, I save the change to the domain object in saveHandler and not in my original call in Step 1. In my GWT front-end I have routines to check that the transformation is complete and show a spinner while waiting.

PicSoup is now using this code - go and check it out!

2010-07-31

Google App Engine Image Storage?

My App Engine project, PicSoup, is a weekly photo voting competition with a twist (read the Help tab at the site for instructions). The app is all about images: users upload (or email in) their competition entry, which is then resized and displayed on the site for everyone to vote on.

The Blobstore is used for the initial upload to get around the 1MB limit. Sadly the Blobstore is not available for incoming email, so email entries are limited to 1MB. The large image is then resized and stored as a blob in the Datastore like this:

Map<String, BlobKey> blobs = blobstoreService.getUploadedBlobs(req);
BlobKey blobKey = blobs.get("upload");

// Transform the image to a small one and save it in the datastore
ImagesService imagesService = ImagesServiceFactory.getImagesService();

Image oldImage = ImagesServiceFactory.makeImageFromBlob(blobKey);

Transform resize = ImagesServiceFactory.makeResize(250, 220);

Image newImage = imagesService.applyTransform(resize, oldImage, ImagesService.OutputEncoding.JPEG);
   
Pic pic = new Pic();
pic.setImage(new Blob(newImage.getImageData()));           
dao.putPic(pic);

When it's time to serve an image the servlet code is pretty simple.

public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
    String picId = req.getParameter("id");
    if (picId != null) {
        Pic p = dao.getPic(Long.parseLong(picId));
        if (p != null) {
            res.setContentType("image/jpeg");
            res.setHeader("Cache-Control", "max-age=1209600"); //Cache for two weeks
            res.getOutputStream().write(p.getImage().getBytes());                
        }
    }
}

Note that I'm using the Cache-Control header with a max-age of two weeks. The image with a given id never changes, so in theory this could be set to forever. This caching is very important because otherwise the app gets hit every time for the image, and users of PicSoup frequently visit the site to check for new entries.

The downside to this is that sometimes the Datastore can be very slow - I've watched images appear like they used to on an old 56k modem! Google were having some problems with Datastore performance, and it is way better now, but it's still not as fast as accessing a static file on a dedicated server.

The performance and the Datastore usage quota put me off keeping a higher-resolution image, but the site really needed one. So I started developing a way to store the big images in Picasa. The documentation is really good and I soon had this working: when the user uploaded their image, the small image would still be stored in the Datastore as above, but a task would then be enqueued on the Task Queue to transform the image from the Blobstore again and upload it to my Picasa account:

private void addToPicasa(String blobKey, String id) {
    ImagesService imagesService = ImagesServiceFactory.getImagesService();
    BlobKey bk = new BlobKey(blobKey);
    Image oldImage = ImagesServiceFactory.makeImageFromBlob(bk);
    Transform resize = ImagesServiceFactory.makeResize(1024, 768);
    Image image = imagesService.applyTransform(resize, oldImage, ImagesService.OutputEncoding.JPEG);
    logger.info("Big image bytes: "+image.getImageData().length);
            
    PicasawebService myService = new PicasawebService("PicSoup");
    try {
        myService.setUserCredentials("mypicasaaccount@gmail.com", "my.password");
        String albumid = "5495486978942507441";
        if (SystemProperty.environment.value() == SystemProperty.Environment.Value.Production) {
            albumid = "5495487073314754801";
        }
        URL albumPostUrl = new URL("http://picasaweb.google.com/data/feed/api/user/mypicasaaccount/albumid/"+albumid);

        PhotoEntry myPhoto = new PhotoEntry();
        myPhoto.setTitle(new PlainTextConstruct(id));

        MediaSource myMedia = new MediaByteArraySource(image.getImageData(), "image/jpeg");
        myPhoto.setMediaSource(myMedia);

        PhotoEntry returnedPhoto = myService.insert(albumPostUrl, myPhoto);            
        MediaContent mc = (MediaContent) returnedPhoto.getContent();

        CompEntry ce = dao.ofy().get(CompEntry.class, Long.parseLong(id));
        ce.setPicUrl(mc.getUri());
        ce.setPhotoId(returnedPhoto.getGphotoId());
        
        dao.ofy().put(ce);
    } catch (Exception e) {
        logger.error(e);
    }
    
    // Delete the hi-res
    blobstoreService.delete(bk);        
}

Before I started the UI work to display the image from Picasa I checked the Terms of Service and realised this solution may be contrary to item 5.9:
5.9 In order to use the Picasa Web Albums API with your service, all End Users on your service must have previously created their own individual Picasa Web Albums accounts. You must explicitly notify End Users that they are accessing their Picasa Web Albums accounts through your service. In other words, you may not create one or more Picasa Web Albums accounts for the purpose of storing images on behalf of users without those users creating their own individual Picasa Web Albums accounts.
Now, strictly speaking, I'm not sure I'm storing the images on behalf of the users - they've kind of donated them to me and my app. I searched around for clarification and found that there are plenty of people trying to do this sort of thing, and the answer is always no. Have a look at this search in the forum.

So, I've removed Picasa from my app and I'm now using the Datastore to hold an 800x600 image as well. (If you go to PicSoup today [31-July-2010] only the most recent entries have the high-res view available, just click the small image). Now that the Datastore performance has improved this is not so bad.

I've looked at Flickr and Photobucket as well and they also seem to have a clause like this in their terms.

Does anyone know of a service where this is allowed?

UPDATE: See my new post which explains how to use the new Blobstore-based fast image serving service.

2010-05-16

App Engine email to 'admins' gotcha

I recently started adding email notifications to PicSoup (my GAEJ application). I followed the examples in the documentation and wrote a really simple function for sending a plain-text email to one recipient:
private void sendEmail(String recipientAddress, String recipientName, String subject, String message) {
    Properties props = new Properties();
    Session session = Session.getDefaultInstance(props, null);
    try {
        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("my-admin-account@gmail.com", "PicSoup Admin"));
        msg.addRecipient(Message.RecipientType.TO, new InternetAddress(recipientAddress, recipientName));
        msg.setSubject(subject);
        msg.setText(message);
        Transport.send(msg);
        logger.info("Sent email to "+recipientAddress);
    } catch (Exception e) {
        logger.error("Failed to send email",e);
    }
}
Simple stuff, and almost identical to the documented example. Note that it uses the InternetAddress constructor that allows you to enter an address and a personal name.

This works really well and allows me to write simple calls like this:
sendEmail(p.getUser().getEmail(), p.getDisplayName(), "PicSoup info", "Your competition entry was successfully added. Good luck!");

This notifies a particular user, using their settings to get the appropriate "personal name". I also wanted to use this call to send notifications to myself; being the administrator, I could do this under the Admins Emailed quota. To do this I thought I could use the special "admins" recipient with my sendEmail function, like this:
sendEmail("admins", "PicSoup Administrators", "PicSoup info", "A new pic has been added.");

Sadly I discovered that this doesn't work - it silently fails to send the email to anyone! It turns out that this is because I included "PicSoup Administrators" as the "personal name" in the InternetAddress object. To make it work I changed my sendEmail method to ignore the "personal name" for emails to admins:
private void sendEmail(String recipientAddress, String recipientName, String subject, String message) {
    Properties props = new Properties();
    Session session = Session.getDefaultInstance(props, null);
    try {
        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("my-admin-account@gmail.com", "PicSoup Admin"));
        if ("admins".equals(recipientAddress)) {
            msg.addRecipient(Message.RecipientType.TO, new InternetAddress(recipientAddress));
        } else {
            msg.addRecipient(Message.RecipientType.TO, new InternetAddress(recipientAddress, recipientName));             
        }
        msg.setSubject(subject);
        msg.setText(message);
        Transport.send(msg);
        logger.info("Sent email to "+recipientAddress);
    } catch (Exception e) {
        logger.error("Failed to send email",e);
    }
}

2010-02-27

Simple Google Gadget on GAE

I've just found out how easy it is to create Google Gadgets and host them on App Engine. All you need to do is create a simple page in your App Engine application to be the content of the gadget, and then host a URL content type gadget specification that points to that page.

Your Gadget specification will look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<Module> 
      <ModulePrefs 
            height="100" 
            title="My Simple Gadget" 
            description="Test gadget" 
            author="Anonymous" 
            author_email="anonymous+gg@gmail.com"/> 
      <Content 
            type="url" 
            href="http://yourappid.appspot.com/gadget.html" /> 
</Module>

In my case I'm using a very simple GWT application. Make sure it looks good at 300 pixels wide. The height can vary, but don't make it too tall because people are less likely to install a gadget that takes up too much space.

The URL content type approach allows you to make a normal web page using any tools you like and you can debug it like any normal page.

To put it all together simply save your gadget spec into the war directory of your GAE application and deploy it so it's available on a URL like this: http://yourappid.appspot.com/gadgetspec.xml

Now you can test it in the iGoogle Sandbox which (after you've signed up) allows you to test unpublished gadgets. Add this developer gadget to make life easy.

When you're happy with it submit your gadget and you're done.

2009-12-05

GAE 1.2.8 fixes Mail but not JAXB

Since 1.2.8 my work-around for decoding incoming mail correctly is no longer required - in fact, in 1.2.8 that code now breaks! My example, which only needs a text/plain message or the first part of a multipart message, now looks like this:
protected ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception {
        Properties props = new Properties(); 
        Session session = Session.getDefaultInstance(props, null); 
        MimeMessage message = new MimeMessage(session, request.getInputStream());

        Address[] from = message.getFrom();
        String fromAddress = "";
        if (from.length > 0) {
            fromAddress = from[0].toString();
        }
        
        Object content = message.getContent();
        String contentText = "";
        if (content instanceof String) {
            contentText = (String)content;
        } else if (content instanceof Multipart) {
            Multipart multipart = (Multipart)content;
            Part part = multipart.getBodyPart(0);
            Object partContent = part.getContent();
            if (partContent instanceof String) {
                contentText = (String)partContent;
            }
        }
        logger.info("Received email from=["+fromAddress+"] subject=["+message.getSubject()+"]");
        logger.info("Content: "+contentText);
        return null;
 }

The key to this fix is that the calls to getContent() now return the correct object type. So in my case I'm only interested in text/plain and therefore just Strings.

Sadly JAXB still doesn't work so I'll continue to use Simple XML and wait for 1.2.9.

UPDATE: JAXB support was fixed on December 10th without updating the version number.

2009-11-21

Receiving email in Google App Engine + Spring

Here's my Spring Controller which, at the moment, receives an email and prints various elements of the message to the log. The instructions here are very good and I only adapted them slightly for Spring.

Firstly I added the inbound-services section to my appengine-web.xml file to enable incoming mail.
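
For reference, that section is the standard one from the App Engine documentation:

<inbound-services>
    <service>mail</service>
</inbound-services>

Next I added a url mapping to my Spring config: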

<bean id="publicUrlMapping" 
  class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <property name="mappings">
        <props>
            <prop key="/_ah/mail/*">mailController</prop>
        </props>
    </property>
</bean>

Here is the MailController class:

import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.util.Properties;

import javax.activation.DataHandler;
import javax.activation.DataSource;
import javax.mail.Address;
import javax.mail.Part;
import javax.mail.Session;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.io.IOUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.AbstractController;

public class MailController extends AbstractController {
    protected final Log logger = LogFactory.getLog(getClass());

    @Override
    protected ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception {
        Properties props = new Properties(); 
        Session session = Session.getDefaultInstance(props, null); 
        MimeMessage message = new MimeMessage(session, request.getInputStream());

        Address[] from = message.getFrom();
        String fromAddress = "";
        if (from.length > 0) {
            fromAddress = from[0].toString();
        }

        String contentType = message.getContentType();
        InputStream inContent = null;
        logger.info("Message ContentType: ["+contentType+"]");

        if (contentType.indexOf("multipart") > -1) {
            //Need to get the first part only
            DataHandler dataHandler = message.getDataHandler();
            DataSource dataSource = dataHandler.getDataSource();
            MimeMultipart mimeMultipart = new MimeMultipart(dataSource);
            Part part = mimeMultipart.getBodyPart(0);            
            contentType = part.getContentType();
            logger.info("Part ContentType: ["+contentType+"]");
            inContent = (InputStream)part.getContent();
        } else {
            //Assume text/plain
            inContent = (InputStream)message.getContent();
        }
        
        String encoding = "";
        if (contentType.indexOf("charset=") > -1) {
            encoding = contentType.split("charset=")[1];
        }
        logger.info("Encoding: ["+encoding+"]");
        
        String content;
        try {
            content = IOUtils.toString(inContent, encoding.toUpperCase());
        } catch (UnsupportedEncodingException e) {
            content = IOUtils.toString(inContent);
        }
        
        logger.info("Received email from=["+fromAddress+"] subject=["+message.getSubject()+"]");
        logger.info("Content: "+content);
        
        return null;
    }
}

Note that on the Google page it says
The getContent() method returns an object that implements the Multipart interface. You can then call getCount() to determine the number of parts and getBodyPart(int index) to return a particular body part.
This doesn't appear to be quite true: if you call getContent() on the MimeMessage it returns a ByteArrayInputStream of the raw bytes of the whole message. According to the documentation here, an input stream should only be returned if the content type is unknown. I think this is a bug.

You can get around this by parsing the content type yourself as I have done in my example. If message.getContentType() returns a string containing "multipart" then I parse it as a multipart message, otherwise I assume it is "text/plain".

In order to extract a single part of the Multipart content you have to pass through a MimeMultipart object. It's here that you can call getCount() and extract the Parts that you want. In my example I just get the first part.

Calling getContent() on the Part still returns a stream of bytes, so you have to convert it using the correct encoding, which you can extract from the ContentType of the Part. I added a try..catch block around the conversion to a String in case the encoding is not recognized - in which case it falls back to the default.

It is vital that you determine whether you have multipart content or not. If you try to parse a "text/plain" message as a multipart then you may well encounter an error like this:
Nested in org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.lang.OutOfMemoryError: Java heap space:
java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Unknown Source)
 at java.io.ByteArrayOutputStream.write(Unknown Source)
 at javax.mail.internet.MimeMultipart.readTillFirstBoundary(MimeMultipart.java:244)
 at javax.mail.internet.MimeMultipart.parse(MimeMultipart.java:181)
 at javax.mail.internet.MimeMultipart.getBodyPart(MimeMultipart.java:114)
The readTillFirstBoundary method fails because in a "text/plain" message there are no boundaries!

Note that the development server always sends a multipart message with two parts: text/plain and text/html. GMail also sends emails in this format but lots of other servers just send text/plain.

2009-11-15

Converting from JAXB to Simple XML

Since JAXB doesn't work on Google App Engine but Simple XML does, I have been converting my application. I'm using XML in quite a loose way: I have only annotated my classes for the elements and attributes which I want to extract from the larger XML document. JAXB is more forgiving when used in this way, but with a few extra parameters the same can be achieved with Simple XML. Here are a few of the conversions I had to make:

1. The RSS element being deserialized has a version attribute on it, but I do not wish to model this in the Java class; I only want to deserialize the channel element.

JAXB:
@XmlRootElement(name="rss")
static class Rss {
    @XmlElement(name="channel")
    Channel channel;
}

Simple XML:
@Root(name="rss", strict=false)
static class Rss {
    @Element(name="channel")
    Channel channel;
}

2. The channel element contains a list of item elements without any wrapper element enclosing the whole list.

JAXB:
static class Channel {
    @XmlElement(name="item")
    List<Item> items;
}

Simple XML:
@Root(strict=false)
static class Channel {
    @ElementList(entry="item", inline=true)
    List<Item> items;
}

3. A List with a wrapper element.

JAXB:
@XmlElementWrapper(name="customfieldvalues")
@XmlElement(name="customfieldvalue")
List<String> customfieldvalues;

Simple XML:
@ElementList(entry="customfieldvalue", required=false)
List<String> customfieldvalues;

4. Deserializing.

JAXB:
JAXBContext jaxbContext = JAXBContext.newInstance(Rss.class);
Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
Rss rss = (Rss) unmarshaller.unmarshal(in);

Simple XML:
Serializer serializer = new Persister();
Rss rss = serializer.read(Rss.class, in);

2009-11-08

Using Simple XML instead of JAXB on Google App Engine

In my previous post I said that I was going to try a hack to get around the lack of support for JAXB in Google App Engine. Not only did this feel bad, it also didn't work for me: despite all the changes that had been made to circumvent the non-whitelisted classes, I still got various java.lang.NoClassDefFoundError exceptions.

So I decided to try Simple XML Serialization instead. This worked really well and caused no problems in the local environment. However, when I deployed it to Google I was hit by the sandbox limitations on reflection. When Simple scans your classes to build its "schema" it calls setAccessible(true) on every method and constructor it finds, all the way up the hierarchy to Object. This violates the sandbox restriction: "An application cannot reflect against any other classes not belonging to itself, and it can not use the setAccessible() method to circumvent these restrictions." App Engine throws a SecurityException when you try to call setAccessible(true) on one of its classes.

For my purposes, and probably the majority case, I do not need to serialize or deserialize via any non-public members of superclasses other than my own. So I decided to absorb any SecurityException thrown during the scanning process, thus leaving those methods out of the "schema". Two minor changes are required to the source: the scan methods in org.simpleframework.xml.core.ClassScanner and org.simpleframework.xml.core.ConstructorScanner both need a try..catch block added, like so:
// ClassScanner
private void scan(Class real, Class type) throws Exception {
   Method[] method = type.getDeclaredMethods();

   for(int i = 0; i < method.length; i++) {
      Method next = method[i];
      try {
         if(!next.isAccessible()) {
            next.setAccessible(true);
         }
         scan(next);
      } catch (SecurityException e) {
         // Absorb this and leave the method out of the schema
      }
   }
}

// ConstructorScanner
private void scan(Class type) throws Exception {
   Constructor[] array = type.getDeclaredConstructors();

   for(Constructor factory: array){
      ClassMap map = new ClassMap(type);

      try {
         if(!factory.isAccessible()) {
            factory.setAccessible(true);
         }
         scan(factory, map);
      } catch (SecurityException e) {
         // Absorb this and leave the constructor out of the schema
      }
   }
}
This now works well in both the local and deployed environments. Perhaps Simple would benefit from a mode, or an option on the @Root annotation, to specify the depth of class scanning as an alternative to this work-around. I will post this on the Simple mailing list and report back.

UPDATE:
The author of Simple has responded to my post on the mailing list saying that my fix above will be added to the next release.

2009-10-31

Vote for JAXB on Google App Engine

JAXB is not yet supported on Google App Engine. If you try to use it you'll get java.lang.NoClassDefFoundError exceptions, because the JAXB classes are not on the App Engine JRE whitelist.

Go to this page http://code.google.com/p/googleappengine/issues/detail?id=1267 and vote for JAXB!

I'm going to try the interim solution provided by a poster to the issue above.

2009-10-26

JSTL on Google App Engine

This caught me out this morning: I was trying to get a simple JSP page with EL to work on Google App Engine, but the EL wasn't being evaluated. This is what I had:

<%@ page contentType="text/html" pageEncoding="UTF-8" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<!DOCTYPE html>
<html>
 <head>
  <title>Maintenance</title>
 </head>
<body>
 <p>Hello from Spring ${user}</p>
 <p>AuthDomain = ${user.authDomain}</p>
 <p>Nickname = ${user.nickname}</p>
 <p>Email = ${user.email}</p>
 <p>UserId = ${user.userId}</p>
</body>
</html>

Unfortunately the output from this was:

Hello from Spring ${user}
AuthDomain = ${user.authDomain}
Nickname = ${user.nickname}
Email = ${user.email}
UserId = ${user.userId}

In order to get Google App Engine to evaluate the EL I had to add this:

<%@ page isELIgnored="false" %>

to the top of the JSP.