Wednesday, December 30, 2009

Fun with the iGala picture frame



NOTE: This is an old article that I never got around to finishing. So it might or might not still be relevant. Turns out iGala is about to launch Android support for the frame, but until then, I suspect the following could still be of interest to some.

One of the most geeky of shopping sites must be thinkgeek.com. They have many wonderful and useless items to appeal to the geek inside of you; hell, even my wife had a blast reading their catalog. Anyway, one of the items they sell (exclusively, apparently) is the iGala digital picture frame from Aequitas Technologies. Now, I've come across countless digital frames, but the iGala is one of the few running an easy-to-access embedded Linux. In the following I will show what I mean and provide some documentation that I have collected.


Probing the frame from outside
It's an 8" LCD, capable of an 800x600 resolution and with a resistive touch membrane in front. Apart from that, not much info is given to the consumer - in spite of the frame being marketed as "Linux inside". You can find the device on your local LAN by doing an IP broadcast ping or using nmap. Here's how I found mine:

casper@workstation:~$ nmap -sP 192.168.0.1-254
Starting Nmap 5.00 ( http://nmap.org ) at 2009-12-30 02:06 CET
Host 192.168.0.1 is up (0.0018s latency).
Host 192.168.0.10 is up (0.086s latency).
Host 192.168.0.101 is up (0.00015s latency).
Host 192.168.0.105 is up (0.0025s latency).
Nmap done: 254 IP addresses (4 hosts up) scanned in 12.88 seconds



I happen to know that 192.168.0.1 is the router, 192.168.0.10 is a wifi web-cam and 192.168.0.101 is my own computer, so that leaves 192.168.0.105 as the frame. Now that we've located it on the LAN, let's probe it a bit more with nmap.

casper@workstation:~$ nmap 192.168.0.105 -p1-65535
Starting Nmap 5.00 ( http://nmap.org ) at 2009-12-30 02:11 CET
Interesting ports on 192.168.0.105:
Not shown: 65532 closed ports
PORT      STATE SERVICE
21/tcp    open  ftp
514/tcp   open  shell
65534/tcp open  unknown



Ok, it appears to be running an FTP server on port 21, an (unencrypted) rsh/shell service on port 514, as well as something unknown on port 65534. Let's check out the FTP server real quick.

casper@workstation:~$ telnet 192.168.0.105 21
Trying 192.168.0.105...
Connected to 192.168.0.105.
Escape character is '^]'.
220 localhost.localdomain FTP server (GNU inetutils 1.4.1) ready.
USER anonymous
530 User anonymous unknown.
USER igala
530 p
...



Ok, so no anonymous account. We'd have to resort to exhaustive trial and error to get in here, so let's move on to the next service.

casper@workstation:~$ rsh -l igala 192.168.0.105 514
Login incorrect.
casper@workstation:~$ rsh -l root 192.168.0.105 514
514: not found
...



Not much better. Again, we'd have to mount some brute-force attack against the device to get in. Off to the third one.

casper@workstation:~$ telnet 192.168.0.105 65534
Trying 192.168.0.105...
Connected to 192.168.0.105.
Escape character is '^]'.

BusyBox v1.4.1 (2008-12-29 10:57:28 CST) Built-in shell (msh)
Enter 'help' for a list of built-in commands.

root:/>



Whoa, bingo! Someone left a BusyBox telnet service listening on one of the very last possible ports.


Probing the frame from inside
Let's fire off some commands to see what we're dealing with here.

root:/> uname -a    
Linux blackfin 2.6.22.16-ADI-2008R1-svn #2477 Wed Dec 31 13:02:59 CST 2008 blackfin unknown



Interesting, Linux kernel 2.6.22.16 for the Blackfin processor.

root:/> cat /proc/cpuinfo
processor : 0
vendor_id : Analog Devices
cpu family : 0x27a5000
model name : ADSP-BF531 540(MHz CCLK) 108(MHz SCLK)
stepping : 5
cpu MHz  : 540.000/108.000
bogomips : 1073.15
Calibration : 536576000 loops
cache size : 16 KB(L1 icache) 16 KB(L1 dcache-wt) 0 KB(L2 cache)
dbank-A/B : cache/sram
icache setup : 4 Sub-banks/4 Ways, 32 Lines/Way
dcache setup : 1 Super-banks/4 Sub-banks/2 Ways, 64 Lines/Way
No Ways are locked
board name : ADDS-BF533-STAMP
board memory : 65536 kB (0x00000000 -> 0x04000000)
kernel memory : 63476 kB (0x00002000 -> 0x03dff000)



A fairly fast CPU - substantially faster than the 108MHz ARM inside my Samsung SPF-105V frame.

root:/> mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw)
ramfs on /var type ramfs (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw)
usbfs on /proc/bus/usb type usbfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
securityfs on /sys/kernel/security type securityfs (rw)
/dev/mtdblock1 on /mnt/flash type yaffs (rw)
/dev/mtdblock2 on /mnt/fdisk type yaffs (rw)



A fair number of filesystems, of which the yaffs partitions /dev/mtdblock1 and /dev/mtdblock2 are probably the most interesting with regard to modding.

root:/> netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 192.168.0.105:50350     www.flickr.vip.mud:http TIME_WAIT   
tcp        0      0 192.168.0.105:50348     www.flickr.vip.mud:http TIME_WAIT   
tcp        0      0 192.168.0.105:50351     www.flickr.vip.mud:http TIME_WAIT   
tcp        0      0 192.168.0.105:50353     www.flickr.vip.mud:http TIME_WAIT   
tcp        0      0 192.168.0.105:50352     www.flickr.vip.mud:http TIME_WAIT   
tcp        0      0 192.168.0.105:50345     www.flickr.vip.mud:http TIME_WAIT   
tcp        0      0 192.168.0.105:50347     www.flickr.vip.mud:http TIME_WAIT   
tcp        0      0 192.168.0.105:50346     www.flickr.vip.mud:http TIME_WAIT   
tcp        0    164 192.168.0.105:telnet    192.168.0.101:57949     ESTABLISHED 
tcp        0      0 192.168.0.105:50349     www.flickr.vip.mud:http TIME_WAIT   



A busy device indeed. For some reason, the frame keeps multiple lingering connections to Flickr.
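If you're curious, you can boil netstat output like the above down to a per-peer tally with a small pipeline. A sketch that reads the netstat text from stdin (the frame's busybox netstat supports fewer flags than a desktop one, hence the function shape):

```shell
# Count TCP connections per foreign host: skip the two header lines,
# take the "Foreign Address" column, drop the port, then tally.
peer_counts() {
    awk 'NR > 2 { split($5, a, ":"); print a[1] }' | sort | uniq -c | sort -rn
}

# Typical use:
# netstat -tn | peer_counts
```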

root:/> dmesg
Hardware Trace:
0 Target : <0x000059f4> { _trap_c + 0x0 }
Source : <0xffa0860c> { _exception_to_level5 + 0xb4 }
1 Target : <0xffa08558> { _exception_to_level5 + 0x0 }
Source : <0xffa084b0> { _ex_trap_c + 0x5c }
2 Target : <0xffa08454> { _ex_trap_c + 0x0 }
Source : <0xffa086ac> { _trap + 0x28 }
3 Target : <0xffa08684> { _trap + 0x0 }
Source : <0x0061ef62> [ /lib/libuClibc-0.9.29.so + 0x1ef62 ]
4 Target : <0x0061ef46> [ /lib/libuClibc-0.9.29.so + 0x1ef46 ]
Source : <0x0061ef3e> [ /lib/libuClibc-0.9.29.so + 0x1ef3e ]
5 Target : <0x0061ef30> [ /lib/libuClibc-0.9.29.so + 0x1ef30 ]
Source : <0x036257f8> [ /mnt/flash/abies/lua + 0x257f8 ]
6 Target : <0x036257f0> [ /mnt/flash/abies/lua + 0x257f0 ]
Source : <0x0362f49e> [ /mnt/flash/abies/lua + 0x2f49e ]
7 Target : <0x0362f488> [ /mnt/flash/abies/lua + 0x2f488 ]
Source : <0x0366a3a8> [ /mnt/flash/abies/lua + 0x6a3a8 ]
8 Target : <0x0366a39c> [ /mnt/flash/abies/lua + 0x6a39c ]
Source : <0x0362f484> [ /mnt/flash/abies/lua + 0x2f484 ]
9 Target : <0x0362f45c> [ /mnt/flash/abies/lua + 0x2f45c ]
Source : <0x0362f456> [ /mnt/flash/abies/lua + 0x2f456 ]
10 Target : <0x0362f450> [ /mnt/flash/abies/lua + 0x2f450 ]
Source : <0x0362f444> [ /mnt/flash/abies/lua + 0x2f444 ]
11 Target : <0x0362f42e> [ /mnt/flash/abies/lua + 0x2f42e ]
Source : <0x0362f420> [ /mnt/flash/abies/lua + 0x2f420 ]
12 Target : <0x0362f3a4> [ /mnt/flash/abies/lua + 0x2f3a4 ]
Source : <0x0362f39a> [ /mnt/flash/abies/lua + 0x2f39a ]
13 Target : <0x0362f394> [ /mnt/flash/abies/lua + 0x2f394 ]
Source : <0x0362f382> [ /mnt/flash/abies/lua + 0x2f382 ]
14 Target : <0x0362f32e> [ /mnt/flash/abies/lua + 0x2f32e ]
Source : <0x0362f2e4> [ /mnt/flash/abies/lua + 0x2f2e4 ]
15 Target : <0x0362f2d4> [ /mnt/flash/abies/lua + 0x2f2d4 ]
Source : <0x0366a3fa> [ /mnt/flash/abies/lua + 0x6a3fa ]



Looks like we have the ever-present C library (uClibc, in this case) servicing Lua code located in /mnt/flash/abies/.

root:/> top
Mem: 45392K used, 15304K free, 0K shrd, 0K buff, 5600K cached
Load average: 2.69 2.42 2.22
PID USER     STATUS   VSZ  PPID %CPU %MEM COMMAND
438 root     R      27536     1 71.7 45.0 lua
21151 root     R        844 13454  7.2  1.3 top
495 root     S      27536   460  3.2 45.0 lua
343 root     S       2806     1  0.8  4.5 nano-X
13453 root     S        508   276  0.8  0.8 telnetd
349 root     SW         0     2  0.8  0.0 rt73
460 root     S      27536   438  0.0 45.0 lua
359 root     S        836     1  0.0  1.3 syslogd
360 root     S        836     1  0.0  1.3 klogd
194 root     S <      800     1  0.0  1.3 udevd
1 root     S        568     0  0.0  0.9 init
547 root     S        516     1  0.0  0.8 dhcpcd
277 root     S        484     1  0.0  0.7 udisk_mount

Some Lua processes, a nano-X server, a telnet daemon, syslog and klog daemons, a DHCP client daemon and a udisk mount service. Anyway, since today is not the day for me to learn Lua, I will stick to plain shell scripting. Let's mod the frame to download some custom telemetry data or a public weather satellite picture once a minute.
#!/bin/sh
# Clear out the preinstalled slideshow images.
rm /mnt/flash/abies/default/*.*
# Show a placeholder image until the first real download arrives.
cp /mnt/flash/wait.png /mnt/flash/abies/default/chart.png
sleep 30
# Refresh the image once a minute, forever.
while true
do
    rm /mnt/flash/abies/default/chart.png
    wget -O /mnt/flash/abies/default/chart.png http://www.ntua.gr/weather/sat.jpg
    sleep 60
done

The above script takes advantage of the fact that the frame by default cycles between 4 preinstalled images located under /mnt/flash/abies/default/. First it clears this picture directory, then it copies a waiting image into place, so the frame shows something sensible while we wait for the first live image. Then we loop indefinitely: delete the old image, download a new one and sleep for 60 seconds.
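One wrinkle with the delete-then-download approach is that the frame may redraw while wget is still writing the file, briefly showing a broken image. A slightly more careful sketch (assuming the frame's busybox provides wget and mv, as it appears to) downloads next to the target and then renames, which is atomic within one filesystem, so the initial rm becomes unnecessary:

```shell
# Fetch a URL into place without ever exposing a half-written image:
# download to a temporary name, then rename over the old file.
fetch_chart() {
    url=$1
    dest=$2
    wget -q -O "$dest.tmp" "$url" && mv "$dest.tmp" "$dest"
}

# In the loop this would replace the rm + wget pair:
# fetch_chart http://www.ntua.gr/weather/sat.jpg /mnt/flash/abies/default/chart.png
```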

On my own frame the downloaded image shows live telemetry data from my house, in the form of temperature, humidity etc. as 24h charts. In principle there's no limit to what you can render on the frame, as long as you are capable of producing 800x600 image representations.
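To illustrate how low the bar really is, the sketch below cooks up a valid 800x600 single-color image using nothing but the shell, by writing the ASCII PPM (P3) format; for real use you'd render charts with a proper tool and convert the result to PNG or JPEG for the frame:

```shell
# Emit an 800x600 solid-blue image in PPM (P3) format: the header is
# the magic "P3", the dimensions and the max channel value, followed
# by one "R G B" triple per pixel, width*height times.
W=800; H=600
{
    printf 'P3\n%d %d\n255\n' "$W" "$H"
    yes '0 0 255' | head -n $((W * H))
} > chart.ppm
```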

It turns out it's even possible to use the official update mechanism and create an update package that applies this mod to a frame simply by inserting a USB stick with the software on it. Just be aware that updates pulled down from iGala's website may conflict (you can block those of course, but then you might as well simply write your own Lua hosting application).

Wednesday, December 23, 2009

Android debug bridge on ubuntu 9.10

It turns out the latest Ubuntu release makes it a bit more difficult to connect the multi-purpose Android debug bridge to your Android device. The following is really just an update of my older entry, to remind myself of the changes in Ubuntu 9.10. This stuff is pieced together from various other sources online (mostly android-discuss) and is thus not particularly original.

Ok, so first of all, there are quite a few more vendors and devices now (yay). In order to identify your device, make sure it is not connected and then execute the Linux command lsusb.


casper@workstation:~$ lsusb
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 004: ID 07cc:0501 Carry Computer Eng., Co., Ltd Mass Storage
Bus 001 Device 002: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 002: ID 046d:c51a Logitech, Inc. MX Revolution/G7 Cordless Mouse
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub


Now connect the device, make sure USB debugging is enabled, and run the command again. You should see one more line:


casper@workstation:~$ lsusb
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 016: ID 0bb4:0c02 High Tech Computer Corp.
Bus 001 Device 004: ID 07cc:0501 Carry Computer Eng., Co., Ltd Mass Storage
Bus 001 Device 002: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 002: ID 046d:c51a Logitech, Inc. MX Revolution/G7 Cordless Mouse
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub



So now we learned that HTC stands for High Tech Computer. :) Next we need to install a udev rule for the device, by creating a file under /etc/udev/rules.d/


gksudo gedit /etc/udev/rules.d/51.android.rules


Enter the following, but adjust the vendor ID and product ID according to what you just found out in the previous step. The entry below will likely only work if you also happen to have an HTC Magic/Sapphire A6161/Nordic (32a).


SUBSYSTEM=="usb", ATTRS{idVendor}=="0bb4", ATTRS{idProduct}=="0c02", MODE="0666"


Make sure everyone can read it:


sudo chmod a+r /etc/udev/rules.d/51.android.rules


Now reload the udev service:


sudo service udev reload


Now you are ready to start the adb server. Note that due to a change/bug in the security policies of Ubuntu 9.10, you won't have access to the device unless you execute adb (or rather, the adb server) as a super user. This is mildly annoying:


casper@workstation:~/development/android-sdk-linux/tools$ adb devices
* daemon not running. starting it now *
* daemon started successfully *
List of devices attached
???????????? no permissions


Instead what you will have to do is start the server manually in super user mode (make sure to kill any existing server first, with sudo adb kill-server):


casper@workstation:~/development/android-sdk-linux/tools$ sudo ./adb start-server
* daemon not running. starting it now *
* daemon started successfully *


And now you may use adb as per usual:


casper@workstation:~/development/android-sdk-linux/tools$ adb devices
List of devices attached
HT95PKF00221 device


Not really groundbreaking stuff, but I thought it might be useful and save time for others running Ubuntu 9.10 who want to get ADB up and running.


Update
The Nexus One, Google's latest Android beast, appears to identify as 18d1:4e11 but without any vendor string. Interesting, since the Nexus One hardware is also produced by HTC. Using adb, however, it lists with the serial id HT9CRP805273, so HTC is not entirely hidden. It appears that internally the phone is referred to as the Passion and/or Mahimahi.
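Presumably the Nexus One can be given the same udev treatment as the HTC Magic above; I haven't verified this myself, but a rule of the familiar shape using the 18d1 vendor id should do it (matching on the vendor alone covers the device regardless of product id):

```
SUBSYSTEM=="usb", ATTRS{idVendor}=="18d1", MODE="0666"
```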

Thursday, August 20, 2009

Tweaking javac leniency

While a great fan of type-safety and static checking, I also like a certain amount of leniency from compilers. Sun's Java compiler is a good example of one that goes overboard in a misguided attempt at servicing the developer.
In the following I will explain why I think so, and how to fix it by modifying the publicly available source code for the compiler. Note that although I have done something similar before, I am no compiler expert and never will be. The modifications shown in this post are mere tweaks, and I will stay far away from anything below semantic analysis.


"Exception is never thrown"

Have you ever had a good coding session, with design, implementation and refactoring happening at immense speed as you try to perfect the internals of a piece of code? All of a sudden you are distracted, because a try block no longer contains any methods that declare a checked exception. So now, in order to satisfy the compiler, you'll have to focus on removing these alternate, albeit dead and harmless, paths. Indeed, you will have to carefully comment out everything but the body of the try block.

As a concrete but contrived example, imagine something like this:



Settings settings = null;

try {
    settings = Settings.loadFile("settings.conf");
    settingsObservers.fireChangeEvent();

} catch (IOException ex) {
    // Logging...
} finally {
    // Cleanup...
}




Now imagine you need to test something real quick, and thus prefer to just new up a Settings object or something similar, so you comment out the line where it loads from a file:



Settings settings = new Settings();

try {
    // settings = Settings.loadFile("settings.conf");
    settingsObservers.fireChangeEvent();

} catch (IOException ex) {
    // Logging...
} finally {
    // Cleanup...
}




That won't work of course, because of the paranoid rules surrounding checked exceptions. Instead, we'll see the compiler complain:


casper@workstation:~$ javac ExceptionNeverThrown.java
ExceptionNeverThrown.java:13: exception java.io.IOException is never thrown in body of corresponding try statement
} catch (IOException ex) {
1 error



So in order to have it run, you meticulously comment out everything related to the try-catch-finally block:



Settings settings = new Settings();

// try {
//     settings = Settings.loadFile("settings.conf");
settingsObservers.fireChangeEvent();
// } catch (IOException ex) {
//     // Logging...
// } finally {
//     // Cleanup...
// }



That's just stupid, and it gets even worse if you have any kind of propagating hierarchy in place! Static checking should be an assistance rather than an annoyance; this is clearly a case of the static dial being turned a tad too high.


Catching checked exceptions without a throw

We can fix this by getting our hands on the OpenJDK source; in my case I opted for the easy-to-use Kijaro sandbox, which Stephen Colebourne set up to encourage this kind of hacking. Kijaro includes instructions on how to build javac for both Windows and Linux. If you want to try this kind of hacking yourself, you will need a checkout of this sandbox or a similar OpenJDK branch. Alternatively, if all you want is to play with the tweaks I'll demonstrate here, you may just grab a copy of my modified javac.

The semantic analysis parts of OpenJDK are largely contained in the package com.sun.tools.javac.comp; for our purpose we need Flow.java, which hosts a bunch of data-flow analysis methods responsible for raising the error conditions surrounding the use of checked exceptions. The compiler walks the AST of the source code through a double-dispatch mechanism (the visitor pattern) that calls methods in Flow.java with the current AST node as argument. This means all we have to do is locate the proper callback and modify it to our need. Down around line 951 you should see the following:



public void visitTry(JCTry tree) {
    ...

    if (chk.subset(exc, caughtInTry)) {
        log.error(l.head.pos(),
                  "except.already.caught", exc);
    } else if (!chk.isUnchecked(l.head.pos(), exc) &&
               exc.tsym != syms.throwableType.tsym &&
               exc.tsym != syms.exceptionType.tsym &&
               !chk.intersects(exc, thrownInTry)) {
        log.error(l.head.pos(),
                  "except.never.thrown.in.try", exc);
    }

    ...
}



Evidently, this is the logic that logs an error when a checked exception is never thrown (the catch list does not intersect the thrown-in-try list). Let's change that call from logging an error to logging a warning:




log.warning(l.head.pos(), "except.never.thrown.in.try", exc);




That's actually all that's needed in the compiler itself. However, note the reference to the resource key "except.never.thrown.in.try". This points to an entry in the resource file com/sun/tools/javac/resources/compiler.properties. If you open it you'll notice the following line:



compiler.err.except.never.thrown.in.try=\
exception {0} is never thrown in body of corresponding try statement



The key does not completely match the one from the log statement, as it is prefixed with "compiler.err.". Since we changed the logging from an error to a warning, javac will now search in vain for an entry with the key "compiler.warn.except.never.thrown.in.try". As we cannot simply fix this by changing the key reference in Flow.java, we add a corresponding new entry to compiler.properties:



compiler.warn.except.never.thrown.in.try=\
exception {0} is never thrown in body of corresponding try statement




Now compile Kijaro/OpenJDK and watch what happens when you use your new javac build to compile our previous Settings sample:


casper@workstation:~$ tweakedjavac ExceptionNeverThrown.java
casper@workstation:~$



We have successfully modified the compiler to make our life a little easier. There are a bunch of similar tweaks one could make, all in the same easy fashion as explained above. For instance, I have converted uncaught checked exceptions from being an error to being a warning (hint: errorUncaught on line 298), which means they don't get in the way of rapid development while not really losing any of the benefits - production code should compile without warnings anyway.
Likewise, I have made the unreachable statement error just a warning too (hint: scanStat on line 493), allowing me to short-circuit a method with an early return statement without having to temporarily comment out the remaining code. To demonstrate all of this in one go, take a look at the following sample code:



import java.io.IOException;

public class TweakedJavaCTest{
    public static void main(String... args){
        // Test of "checked exception not caught" (throws InterruptedException)
        Thread.sleep(100);
        System.out.println("After sleep...");

        // Test of "checked exception not thrown"
        try{
            System.out.println("Inside try...");
        }catch(IOException e){
        }

        // Test of "unreachable statement"
        return;
        System.out.println("This will never be executed!");
    }
}



With a stock javac, you won't get very far:


casper@workstation:~$ javac TweakedJavaCTest.java
TweakedJavaCTest.java:12: exception java.io.IOException is never thrown in body of corresponding try statement
}catch(IOException e){
^
TweakedJavaCTest.java:17: unreachable statement
System.out.println("This will never be executed!");
^
TweakedJavaCTest.java:6: unreported exception java.lang.InterruptedException; must be caught or declared to be thrown
Thread.sleep(100);
^
3 errors



In fact, we get no artifact to run at all. With the tweaked compiler, however, it's an entirely different matter:


casper@workstation:~$ tweakedjavac TweakedJavaCTest.java
TweakedJavaCTest.java:12: warning: exception java.io.IOException is never thrown in body of corresponding try statement
}catch(IOException e){
^
TweakedJavaCTest.java:17: warning: unreachable statement
System.out.println("This will never be executed!");
^
TweakedJavaCTest.java:6: warning: unreported exception java.lang.InterruptedException; must be caught or declared to be thrown
Thread.sleep(100);
^
3 warnings



Since we've reduced the previous errors to now being just warnings, we'll get an actual build which we can run:


casper@workstation:~$ java TweakedJavaCTest
After sleep...
Inside try...
casper@workstation:~$



Voila, pretty easy eh? If you want to play with this javac build you can grab it here. To compile a Java source file with this javac build, you need to use it this way:


casper@workstation:~$ java -Xbootclasspath/p:tweakedjavac.jar -ea:com.sun.tools -jar tweakedjavac.jar TweakedJavaCTest.java



In conclusion

I'm actually surprised at how easy it was to make these small tweaks. While some people applaud checked exceptions, I think this more lenient version is how the Java compiler should have behaved from day one. The obvious drawback is that you are required to build and distribute your own javac, which won't sit very well with many organizations, even if you could still use it as a less-hassle development tool for yourself. The other issue is IDE support: although it should be relatively easy* to plug this javac into NetBeans, other IDEs rely entirely on their own parsers.

It would be nice if tweaks like these were considered for the official JDK7, since they don't actually break backwards compatibility. However, Sun is an extremely conservative company and has not shown interest in fixing or evolving the compiler over the last couple of years.
Perhaps the way forward is an alternative approach which does much the same, but without requiring a modified compiler. Reinier Zwitserloot from the Lombok project is dabbling with such an approach, which you might want to check out.


* I did give it a quick try, packing up a tools.jar, placing it in the JAVA_HOME/lib folder and making sure NetBeans was using this platform for my project. However, it did not work as expected: while I was able to build inside NetBeans, the syntax highlighting did not pick up on my modifications.


Update

I have since added this to Kijaro under the branch leniency (you'll need a java.net login).

Thursday, August 6, 2009

Android book roundup

While it may no longer be so much for reference purposes as in the past, I still really like to have good books at my disposal as a software developer. The DPI of a book still beats that of any screen, and I can bring a book to the restroom without raising eyebrows from my girlfriend. In the following is a brief but hopefully useful review of the 3 Android books I have in my library.

O'Reilly's Android Application Development


The book covers Android 1.1 and is thus already a little dated. The content and organization are odd and lackluster, I find; for instance, the chapter on signing and publishing your application comes before the chapters on basic views and widgets. It also doesn't come with an ebook/PDF, so there's really not a whole lot going for this book; you are probably better off looking elsewhere.





Manning's Unlocking Android


Also covers 1.1, but with better content, and it is definitely better organized than the O'Reilly book. It comes with an ebook/PDF as well as source code. It does make certain assumptions about the skills of the reader (for instance, how the Eclipse IDE works, as the examples are Ant based and need to be imported), but I reckon most readers will be quite satisfied with this book. It has a foreword by none other than Java Posse's chief editor Dick Wall.




Commonware's The Busy Coder's Guide to Android Development


This is the most recently published book in this mini roundup. I only have the ebook/PDF version, so I am not 100% sure whether the dead-tree version covers 1.5/Cupcake as well. You actually get 3 books, including one on advanced topics such as reading sensor and camera data. It works on a 1-year subscription basis and without DRM, so you are likely to also get 1.6 and 2.0 coverage simply by downloading the books again at a later point when they have been revised. The material itself feels very to-the-point, and it's actually the only book I have come across that provides a thorough introduction to lists - an essential part of any Android application.

In conclusion

It's my humble opinion that all the above books favor XML layout a bit too much, as it makes the examples harder to read and type - and let's face it, there's a limit to how complex a layout can be on such a small screen anyway. This appears to be more or less dictated by the architecture of Android, so not much to do about that. If you have developed in JSF vs. Wicket/GWT then you know what I'm talking about. :)

If you only want one book on the subject, "The Busy Coder's Guide to Android Development" would be the one to go for. The content, organization and presentation are just unmatched by any of the others, though "Unlocking Android" is by no means a bad book. "Android Application Development", however, was a disappointment.


Update
Since writing this initial entry, Romain Guy from the Android team has informed me in this thread that XML is favored in order to have multiple resources set up declaratively, such that Android can automatically select the best match depending on hardware and environment. That makes a lot of sense of course, even if I still think this aspect is better introduced after the reader has tried some basic UI building in Java, to even out the initial learning curve. After all, Java provides beginners with familiarity, type-safety and code completion. I've talked to other developers who share this opinion (slides from a Danish JUG meeting), but yours may of course vary.

Friday, July 10, 2009

DDMS on Ubuntu 64bit

The Android SDK comes with a pretty nice suite of tools, one of these being the Dalvik Debug Monitor Service (DDMS). However, the Android SDK is distributed with 32-bit SWT libraries, so if you are on a 64-bit Linux running a 64-bit Java as default, you might run into this little message:


casper@workstation:/$ ddms
43:27 E/ddms: shutting down due to uncaught exception
43:27 E/ddms: java.lang.UnsatisfiedLinkError: /android-sdk-linux_x86-1.5_r2/tools/lib/libswt-pi-gtk-3236.so: /android-sdk-linux_x86-1.5_r2/tools/lib/libswt-pi-gtk-3236.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1778)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1687)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1030)
at org.eclipse.swt.internal.Library.loadLibrary(Library.java:123)
...


This is fairly easy to fix, by first installing the 32bit version of the JRE:


casper@workstation:/$ sudo apt-get install ia32-sun-java6-bin


And then we make sure DDMS uses this particular one. There are countless ways to do this (adding environment variables etc.), but I chose to simply modify the ddms script itself, which seems like the least invasive way for this purpose. If you go to line 71, it should currently read:

java_cmd="java"


It's obviously relying on the /usr/bin/java symlink on the path, which leads to /etc/alternatives/java, which in turn leads to our default 64-bit JRE. So change the line to this instead:

java_cmd="/usr/lib/jvm/ia32-java-6-sun/jre/bin/java"


Now DDMS should run so you can monitor, debug and take screenshots of your virtual or physical device.
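As a final sanity check it can be handy to verify the word width of a binary yourself, whether that's the JRE you point the script at or the bundled SWT .so. The fifth byte of any ELF file (EI_CLASS) encodes exactly this, so a tiny helper needs nothing beyond od:

```shell
# Print "32-bit" or "64-bit" for an ELF file by reading EI_CLASS,
# the byte at offset 4 of the ELF header (1 = ELF32, 2 = ELF64).
elf_class() {
    c=$(od -An -j4 -N1 -tu1 "$1" | tr -d ' ')
    case $c in
        1) echo "32-bit" ;;
        2) echo "64-bit" ;;
        *) echo "not an ELF file?" ;;
    esac
}

# e.g. elf_class /usr/lib/jvm/ia32-java-6-sun/jre/bin/java
```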

Tuesday, June 30, 2009

Firefox 3.5 on Ubuntu 9.04

Well, today the much anticipated speedy Firefox 3.5 arrived. Ubuntu probably won't push this one out as an update, and I am not sure when Firefox's own auto-update will offer the upgrade, so if you're as impatient as I am, this is how to install it manually. Do read the whole thing through before deciding whether to do this, as there are some potential plugin issues involved!

Installation procedure
First you need to download and extract it in an appropriate place. I'm extracting to /opt/firefox-3.5 so as to avoid collisions later when an official Ubuntu version is available:


casper@workstation/$ cd /opt/
casper@workstation/opt/$ sudo wget http://releases.mozilla.org/pub/mozilla.org/firefox/releases/3.5/linux-i686/en-US/firefox-3.5.tar.bz2
casper@workstation/opt/$ sudo tar -xjf firefox-3.5.tar.bz2
casper@workstation/opt/$ sudo mv firefox firefox-3.5
casper@workstation/opt/$ sudo rm firefox-3.5.tar.bz2 -f


Perhaps you noticed that the download is an i686/32-bit Linux release; this is because Mozilla does not presently build amd64/64-bit versions. For those, we will have to wait for a version from our distribution. Now, let's check Ubuntu's browser settings:

casper@workstation:/opt$ sudo update-alternatives --display x-www-browser
x-www-browser - status is auto.
link currently points to /usr/bin/firefox-3.0
/usr/bin/firefox-3.0 - priority 40
Current `best' version is /usr/bin/firefox-3.0.


As you can see, on my machine only Firefox 3.0 is installed. We can add the newly downloaded version as an alternative and give it precedence on the system by issuing:

casper@workstation/opt/$ sudo update-alternatives --install /usr/bin/firefox-3.5 x-www-browser /opt/firefox-3.5/firefox 50


Also, we can make it the default:

casper@workstation/opt/$ sudo update-alternatives --set x-www-browser /opt/firefox-3.5/firefox
Using '/opt/firefox-3.5/firefox' to provide 'x-www-browser'.


However, it's still not the default on the path. To make it so, Ubuntu's /usr/bin/firefox symlink needs to be updated to point at /opt/firefox-3.5/firefox:

casper@workstation:/opt$ sudo rm /usr/bin/firefox
casper@workstation:/opt$ sudo ln -s /opt/firefox-3.5/firefox /usr/bin/firefox
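If you want to see the effect of this link swap before touching /usr/bin, the same pattern can be rehearsed in a scratch directory. The paths and the fake "firefox" binary below are purely illustrative:

```shell
# Rehearse the symlink swap in a throwaway directory (no sudo needed)
tmp=$(mktemp -d)
mkdir -p "$tmp/opt/firefox-3.5"

# Stand-in for the real firefox binary
printf '#!/bin/sh\necho "Firefox 3.5"\n' > "$tmp/opt/firefox-3.5/firefox"
chmod +x "$tmp/opt/firefox-3.5/firefox"

# Create the link, inspect it, and run through it
ln -s "$tmp/opt/firefox-3.5/firefox" "$tmp/firefox"
readlink "$tmp/firefox"    # prints the target path
"$tmp/firefox"             # prints: Firefox 3.5
rm -rf "$tmp"
```

The real commands above do exactly this, just with /usr/bin/firefox as the link.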



Voila. Issuing "firefox" on the command line should now start Firefox 3.5, and Ubuntu's shortcuts should now also point to the new version. On first launch, Firefox 3.5 will check your extensions and plugins, and will use your existing bookmarks etc. from your user's home directory as usual on Linux.

Caveats
Note that if you previously had the amd64/64-bit version of Firefox and its plugins, those plugins will no longer work! It is, however, quite easy to use 32-bit plugins instead. In my case, I lifted the two plugins I care about (libflashplayer.so and libjavaplugin.so) from another 32-bit installation, but you can simply use the Synaptic package manager to install 32-bit plugins if you do not have them already.

If you don't wish to keep multiple versions and have perhaps already uninstalled the old 64-bit 3.0 version, you may simply place the files in ~/.mozilla/plugins/. However, if you wish to keep both versions on your system, you should leave the 64-bit plugins alone and instead add the 32-bit plugins locally to your 32-bit Firefox 3.5 installation in /opt/firefox-3.5/plugins/.

It's annoying to be back on 32-bit for the time being, but I do love the speed of this new version, and I'm sure you will too.


Update: Turns out there's an easier way, explained in this blog entry: http://talkingincircles.net/2009/07/01/firefox-3-5-in-ubuntu-9-04-64-bit/


Android Debug Bridge on Ubuntu 9.04

To perform general debugging and install alternative ROMs on the HTC Magic under Ubuntu 9.04, you will need a little more footwork than the documentation mentions.

First, I assume you have installed the SDK and added it to your PATH so that you can run adb commands from anywhere. You also need USB debugging enabled on the phone; do this by going to Settings > Applications > Development and checking the "USB debugging" item.
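If the SDK tools are not on your PATH yet, a line like the following in your ~/.bashrc will do. The ~/android-sdk location is just an example - point it at wherever you unpacked the SDK:

```shell
# Example only: adjust the SDK path to match your installation
export PATH="$PATH:$HOME/android-sdk/tools"

# adb (and the other SDK tools) should now resolve from any directory
echo "$PATH"
```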

Now, add a udev rule for your device:


gksudo gedit /etc/udev/rules.d/51.android.rules


Add the following to the file and save it:


SUBSYSTEM=="usb", ATTRS{idVendor}=="0bb4", MODE="0666"


Reload USB devices by issuing the command:

sudo /etc/init.d/udev reload


Unplug and re-plug the device, then check that it shows up:

casper@workstation:~$ adb devices
* daemon not running. starting it now *
* daemon started successfully *
List of devices attached
HT95PKF00221 device


There, now you can browse your phone with the adb shell command and start diving into the rather interesting world of rooting and alternative ROMs found over on XDA-developers. My next move is to try the HTC Hero ROM on my Magic, which will give the phone the new multitouch Rosie/Sense interface.

Another cool thing is that if you start your Eclipse SDK and try to run an application, it will now deploy to the physical device rather than the emulator - pretty neat!

Saturday, June 20, 2009

HTC Magic Android emulator skin

So I started playing with Android development, having just received "Unlocking Android" and "Android Application Development" from Amazon. However, before getting to do any real development, I got sidetracked into trying to create a skin for the Android emulator that matches my fancy new HTC Magic. One actually exists, but it's branded Vodafone, looks silvery, and has the buttons all wrong, so I decided this would be a good way to brush up on my Gimp skills. And here it is; I am no graphics artist, so I am only moderately pleased.



You may use the images and skin as you please, under a Creative Commons license. You can download the skin by clicking here. To install, simply extract the files to {android_install_dir}/platforms/android-1.5/skins/HTCMagic/.

If you cannot select the skin in Eclipse, you may have to edit the config.ini file of your Android Virtual Device manually (under ~/.android/avd/); simply set the skin.path property to point to HTCMagic. You can also run the emulator manually by executing {android_install_dir}/tools/emulator and passing the command-line argument "-skin HTCMagic".
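For reference, the relevant lines in the AVD's config.ini might look something like the following. Treat this as a sketch: the skin.path value assumes you extracted the skin into the android-1.5 platform directory as described above, and the skin.name line is my assumption about the companion property:

```ini
skin.name=HTCMagic
skin.path=platforms/android-1.5/skins/HTCMagic
```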

NOTE: I am aware that pressing the search button in the emulator has no effect. This appears to be because it is not implemented in the emulator at this time, since no other skin appears to have a working search button either. I even read through the C and Python source code to see if I could find the proper key name, but without success. Also note that the screen shown within the emulator in the above image is not the generic Android 1.5 (Cupcake) UI but rather a static image of HTC's Rosie/Sense UI, which I find more sexy (can phone software look sexy?).

Wednesday, June 10, 2009

Android awesomeness

So I finally got an Android device, in the form of the second-generation hardware from HTC known variously as the Magic, G2, Ion and Sapphire. Since the original HTC Dream (G1), this phone has received a major visual overhaul and now boasts a soft keyboard rather than a physical one, making it a lot less clunky. And let me admit right away: I fell in love at once and have hardly put it down since. It is not my intention here to write any kind of thorough or balanced review, but merely to showcase Android and its applications to others contemplating getting one of these phones.

Initial impression
The phone looks and feels sturdy and well built, and is thankfully not carrier-branded as appears to be the case in the US. The 3.2" touch screen is glass, just like the iPhone's, and the body is thick, scratch-proof plastic. It has what amounts to 11 buttons (incl. a joypad) below the screen, as well as volume buttons on the side.
Having played with an iPhone a few times, my impression is that the Magic is very reminiscent of that groundbreaking device. I never actually invested in an iPhone, as I don't care much for the handcuffs Apple likes to put on its customers - it's a somewhat different story with this phone. To me, the Magic also feels better in the hand than an iPhone and definitely slips more easily into a pocket. The battery is a 1340mAh lithium-ion unit, bigger than the one in the G1 and enough to keep the phone powered for the day. It's important to note that it takes a few days for the battery gauge to calibrate (to know its state), and it also takes a few days to learn how to use the phone so as not to drain the battery immediately - but this is true of any smartphone.



Android
While the hardware is nice, the real kicker is Google's Android operating system. The HTC Magic comes with version 1.5 (Cupcake), as well as some extra software installed into the ROM by my carrier that enables Microsoft Exchange integration etc.
Applications can be installed either from the Android Market or by directly downloading an .apk file. There are already around 5,000 applications available, and with so many phones coming out and a rapidly growing community, I see no reason why Android should stay shy of the iPhone's 50,000 applications.


You can opt to lock the device by a custom gesture. This is a very handy feature that beats the traditional approach of having to enter a code.





The home screen with widgets and shortcuts on the desktop, live picture of the Eiffel tower as wallpaper.





On the left home screen I have primarily system tools.






On the right home screen I have frequently used applications.






The applications menu pops up when you drag the slider up. The screenshot does not convey it, but I have around 75 applications installed.






Long-pressing the home button reveals recently run applications; unlike the iPhone, this device has no trouble multitasking.






At the top of the screen, the status panel can be expanded as well. It's where notifications of any kind go (email, SMS, instant message), and it works remarkably well.






In horizontal mode, the soft keyboard is a joy to use thanks to the dictionary suggestions.






In vertical mode, you really need to be precise and having small fingers is definitely a plus.





Google applications


The built-in browser, based on WebKit like Safari and Chrome. It does its job to perfection - the best mobile browser experience I have tried.






The Gmail integration is equally impressive. Simple and functional.





Of course, Google Maps is there as well and works with the GPS.





It feels special to be out and about, with satellite pictures looking down at your position.





Even street view is integrated.





Sky Map uses all the sensors of the phone to render an accurate representation of the sky, complete with stars, planets etc., depending on where you point the phone.






It plays mp4 in very decent quality, here Romain Guy from the Android team.






YouTube is available via an integrated application that simply works.





The camera used as a barcode scanner, here a book is scanned.





Voila - we have looked up the book and can read reviews and find the cheapest stores.





The Android Market is full of stuff - approximately 5,000 apps, and growing daily.





aTrackDog can track all installed applications and tell you when there are new versions and provide an easy way to update.





GPSstats is good to determine GPS coverage.






CellTower can be used to triangulate and display cell towers around your neighbourhood. Not really useful, but fun to see in action.






FlightStats was great on a recent trip, always providing me with up-to-date info about delays.





There are of course countless games, though they are less interesting to me personally.





No matter the IM protocol you rely on, you are sure to be connected.





Yes believe it or not, there's even a metal detector.





NetCounter is invaluable in monitoring data traffic. It may save you from receiving a nasty bill.





SpareParts provides great system info as well as tweaks.





Speedtest is good to debug and determine connection speeds.





TaskManager lets you monitor and kill applications.





MyTracks tracks you as you move around. Here is my walking route from the office to the local subway.





Some metrics from the walk.





Not quite detailed enough to show me mowing the lawn. lol





The beauty of an open platform. You can run Mono/C# stuff on it.





...as well as Python, BeanShell and Lua.





...and execute shell commands since it's a real Linux underneath.




Conclusion
There are only a few thorns on this white rose. It would be nice if HTC had used a standard 3.5mm headphone jack instead of the proprietary mini-USB plug, although this is easily fixed with a small adaptor. Also, the speakers are not the loudest I've heard on a phone, so I would have preferred them to face the same direction as the screen, to improve the video experience.

As for Android and the software, it's even harder to find flaws. The menu where you see each and every application is perhaps a bit overwhelming, although you quickly get used to it (it's sorted alphabetically). To remedy this, people sometimes install home-screen replacements that allow more than 3 screens. This is something Google can easily adjust through coming Android updates.

The real beauty of this device is how open it is. For instance, Android itself has not yet been translated into languages other than English, but that has not stopped people from developing localized soft keyboards that match a particular locale (in my case, Danish). Even though the phone is $200-300 cheaper than an iPhone, it has more potential and longevity; I think only Apple fanboys will miss the slightly better polish put on by Apple. I'm already looking forward to the upgrade to Donut in a few months, as well as Eclair by the end of the year, and I'm eager to start developing for it myself.

Monday, May 25, 2009

Maven: Compile to a RAM-drive

While preparing to do a backup recently, I realized that I had over 2GB in 110,000 files that was the result of either Ant build or Maven target output. It occurred to me that with Maven especially, this is nothing more than garbage to keep around, since Maven copies the artifacts to its ~/.m2 folder anyway!

In a previous blog entry, I explained how to mount a RAM-drive on Ubuntu and use it as a development folder for much higher performance in the compile-test-run cycle. I have to admit I don't use this for each and every project, since it's a little complex and fragile.

But there's an interesting compromise to be had here. It turns out it is possible to instruct Maven to build artifacts to a RAM-drive. That way you not only gain some performance but also some automatic "garbage collection", in that by the next boot you will have no traces of these build steps lying around. It would also be a very healthy thing to do if you're running off an SSD, which is susceptible to wear and tear. The following explains how to set this up with Maven.

Prerequisites
You are going to need a RAM-drive for this. It is extremely easy to set one up on Ubuntu. In the following, I assign the mount point to the Ubuntu group "plugdev", which is the one used for auto-mounting USB drives etc. That way all users should have permission to use it:


sudo mkdir /media/ramdrive
sudo chgrp plugdev /media/ramdrive
sudo chmod g+w /media/ramdrive
sudo chmod +t /media/ramdrive
sudo mount tmpfs /media/ramdrive -t tmpfs


You can have your system automatically create and mount a TMPFS partition for you, by modifying your /etc/fstab file. Simply add the following to it:


none /media/ramdrive tmpfs defaults,user,size=1G,mode=0777 0 0


An alternative approach would be to have the RAM-drive mounted at compile time. This is perfectly possible by calling out to an Ant target from Maven and having it execute a Bash script. The problem, though, is that in order to mount filesystems on Linux you need to be a superuser, so for this approach to work you would have to hardwire your sudo password into the script - and that would be pretty dumb. There may well be a way around this, but if there is, I am unaware of it.

The POM modification
Basically, all we need to do is tell Maven where our build directory, output directory and test output directory are. This can be done by simply specifying them within the build tag:



<project>
   <build>
       <directory>/media/ramdrive/maven-targets/${project.name}</directory>
       <outputDirectory>/media/ramdrive/maven-targets/${project.name}/classes</outputDirectory>
       <testOutputDirectory>/media/ramdrive/maven-targets/${project.name}/test-classes</testOutputDirectory>
       ...
   </build>
   ...
</project>


You'll notice I place everything under the sub-folder "maven-targets"; this is so that none of my targets conflict with other stuff on my RAM-drive. The actual sub-folder for each individual project is named after the project itself.

As Maven profile
While the above certainly works, it's arguably some heavy bending of Maven's conventions, and we run the risk that people not as fortunate as us (i.e. Windows users) won't have a RAM-drive nor the possibility of creating one. To remedy this, we can encapsulate the custom behaviour in a Maven profile.

Sadly, in a profile we are not allowed to specify build paths as we just did above. We can however cheat a bit by going through properties. Start by defining some global properties in a properties tag under the project tag:



<project>
   <properties>
       <build.dir>target</build.dir>
       <build.outputDir>target/classes</build.outputDir>
       <build.testOutputDir>target/test-classes</build.testOutputDir>
       ...
   </properties>
   ...
</project>


This complies with the standard Maven conventions. Then, in the non-profile build tag, do as before, but rather than hardwiring the paths, use the properties we just defined. Like so:



<project>
   <build>
       <directory>${build.dir}</directory>
       <outputDirectory>${build.outputDir}</outputDirectory>
       <testOutputDirectory>${build.testOutputDir}</testOutputDirectory>
       ...
   </build>
   ...
</project>


The behaviour thus far should not differ in any way from the default Maven build cycle. The last thing we need to do now is to add a profile for emitting to the RAM-drive:



<project>
   <profiles>
       <profile>
           <id>RAM-drive</id>
           <build>
               <plugins>
                   <plugin>
                       <artifactId>maven-compiler-plugin</artifactId>
                       <inherited>true</inherited>
                       <configuration>
                           <source>1.6</source>
                           <target>1.6</target>
                       </configuration>
                   </plugin>
               </plugins>
           </build>
           <properties>
               <build.dir>/media/ramdrive/maven-targets/${project.name}</build.dir>
               <build.outputDir>/media/ramdrive/maven-targets/${project.name}/classes</build.outputDir>
               <build.testOutputDir>/media/ramdrive/maven-targets/${project.name}/test-classes</build.testOutputDir>
           </properties>
       </profile>
   </profiles>
   ...
</project>


That's it. Now you just select the "RAM-drive" profile in your IDE when you build, without negative consequences for your less fortunate colleagues. Of course, a similar setup can be made with Ant and most other build environments.
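If you'd rather not select the profile by hand every time, Maven can also activate it automatically whenever the RAM-drive mount is actually present, via file-based profile activation. The sketch below shows the idea; verify the exact activation support against your Maven version:

```xml
<profile>
    <id>RAM-drive</id>
    <!-- Activate automatically when the RAM-drive mount point exists -->
    <activation>
        <file>
            <exists>/media/ramdrive</exists>
        </file>
    </activation>
    ...
</profile>
```

On the command line, the profile can also be selected explicitly with "mvn -PRAM-drive clean install".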

Saturday, May 16, 2009

The pain of request scoped JSF

While I like the idea of a component-based web framework, JSF never really felt right to me. There are too many pitfalls, and the whole programming model (even with the use of Facelets) feels cumbersome and more complex than necessary. I could probably write up a detailed list of reasons why I think JSF is among the least productive and least fun web frameworks of those I know, but that is not the purpose of this blog post. Suffice it to say this appears to be a shared sentiment. This post addresses only JSF's focus on server-side state and the associated overloading of POST.

The problem of state
All programs that do anything remotely interesting need to keep some kind of state. In web applications, there are really only two places to put state: on the server or on the client. Because HTTP itself is stateless, most frameworks store state on the server in a session that is allocated the very first time a unique user visits, keyed either by a cookie or by a sessionId that's passed along at all times. It's a simple approach that works reasonably well, but it also has some problems.

  • Scalability - When you keep state, you inevitably consume memory that accumulates over time, only to be cleared when the session times out. While memory and swap space are relatively cheap, throwing hardware at the problem only works while your user count remains relatively low. Another problem comes into play when you add more machines to carry the workload, since you then need to replicate the session state across all the machines.

  • Complexity - The moment you keep state, you have to think about concurrency and visibility. Everything is essentially shared, so opening several browser windows or using asynchronous (Ajax) calls can potentially wreak havoc in the state space of your session. Only with a debugger and non-trivial testing can you gain an overview of this aspect. It does not help that in JSF there is (to my knowledge) no way to detect a new page load in a session-scoped backing bean, so you can't use that to do a partial "reset".

  • Unpredictability - Have you ever been working on something online, then picked up a phone call or gone to lunch, only to return to an error message saying your session has timed out? In JSF you often get the more cryptic message "Could not restore view!". Because of the stateless nature of HTTP, there's no way for the server to know whether a user is still sitting at the other end, so it relies on a timeout for the session. The session timeout and the amount of memory consumed by the server are inversely related, which is why the timeout is often kept down around 30 minutes.

I'm not totally against state on the server when it's used as an application-level caching mechanism or for lightweight session info such as language settings. Unfortunately, it is all too common to see a heavyweight object hierarchy from another layer (JPA entities come to mind) creep into the user session, and then it's bye-bye scalability.

JSF favors statefulness
Everything about the JSF programming model is clearly made with session-scoped backing beans in mind. For instance, if you try to use various components in request scope, you will find it a major problem that events dispatch BEFORE your bean has had its properties filled in from the POST request. So, since JSF does not provide sufficient context, what people often do is go behind JSF's back and grab the required context from the request parameters.

For example, say you have a JSF page that displays some info about a customer entity. You would like an easy way to test it and to interface with legacy apps, so you allow a request parameter called customerId to be passed along as initial context. From within this page, you may perform various CRUD operations on the customer, which means you pass this context along in various ways, typically by including something like this in your POST-back form:


<html:form id="someform">
   <html:commandButton value="Show customer" action="showCustomer"/>
   <html:inputHidden id="customerId" value="#{customerBB.customerId}" binding="#{customerBB.customerIdHidden}"/>
</html:form>


What people then do in the backing bean is use lazy detection code that first tries to fetch the property directly (if they know the form name) and then falls back to bound components. The latter approach looks something like this:


private static final String CUSTOMER_ID_PARAM = "customerId";
private Long customerId;
private HtmlInputHidden customerIdHidden;

public HtmlInputHidden getCustomerIdHidden() {
    return customerIdHidden;
}

public void setCustomerIdHidden(HtmlInputHidden customerIdHidden) {
    this.customerIdHidden = customerIdHidden;
}

public Long getCustomerId() {
    if (customerId == null) {
        String customerIdString = getRequestParameter(CUSTOMER_ID_PARAM, false);
        if (customerIdString != null && !customerIdString.isEmpty()) {
            customerId = Long.parseLong(customerIdString);
        } else if (getCustomerIdHidden() != null) {
            customerIdString = getRequestParameter(
                getCustomerIdHidden().getClientId(getFacesContext()), false);
            if (customerIdString != null && !customerIdString.isEmpty()) {
                customerId = Long.parseLong(customerIdString);
            }
        }
    }
    return customerId;
}


Very verbose and fragile. And while JSF proponents will probably claim this is not the sanctioned way to use JSF, it's the kind of hack I've seen on numerous occasions. In the following sections I'll describe an approach that is essentially a return to an action-based framework, where the idea is to separate functionality into distinct pages and backing beans, with a controlled context flowing between them.

Managed properties
It turns out you can instruct JSF to automatically initialize a backing-bean property from both POST and GET requests. You do this via the managed-property tag in faces-config.xml, like so:


<managed-property>
   <property-name>customerId</property-name>
   <value>#{param['customerId']}</value>
</managed-property>
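For completeness, the managed-property element lives inside the bean's managed-bean declaration in faces-config.xml. A request-scoped declaration might look like the sketch below; the bean class name is made up for the example:

```xml
<managed-bean>
    <managed-bean-name>customerBB</managed-bean-name>
    <!-- Hypothetical class name; use your actual backing bean here -->
    <managed-bean-class>com.example.CustomerBackingBean</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
    <managed-property>
        <property-name>customerId</property-name>
        <value>#{param['customerId']}</value>
    </managed-property>
</managed-bean>
```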


That still leaves us with the problem that in JSF, input fields are given ids of the form [formname]:[inputname], so a <h:inputHidden> would not be injected into the bean's customerId field. There's a way around this, though. In your form, simply escape back to basic HTML by using the <core:verbatim> tag. This prevents JSF from modifying the tag name and id:


<core:verbatim>
   <input type="hidden" id="customerId" name="customerId" value="#{customerBB.customerId}"></input>
</core:verbatim>


Overloaded POST's with GET redirects
A strongly associated aspect of state on the server is how everything in JSF is done through POSTs. That presents a particular problem for JSF authors who wish to work in request scope, since a URL no longer expresses the minimum state needed for the website to "live in the know" - hitting refresh in the browser won't work. What I do is peek over at RESTful architectures: use POST only for actions that cause state to mutate (updates or creation), and GET for the remaining idempotent actions.

For instance, rather than using an <html:commandLink> and navigation rules to go to an edit page for an item in a list, I'll use an <html:outputLink> that points to the JSF page itself, along with the necessary context:


<html:outputLink value="customer-edit.jsf">
   <html:outputText value="Edit"></html:outputText>
   <core:param name="customerId" value="#{customerBB.customerId}"/>
</html:outputLink>


Note that this actually has nothing to do with the form; it simply expands to a hyperlink, which is why in this case we don't need the <core:verbatim> tag.
The second part is to ensure we always emit the new context from the logic in the backing bean; that means having mutating POST operations forward to a GET. This can be done by manually navigating to a result URL rather than returning a navigation rule. The following could be an example of what to do after a save operation was invoked on the customer-edit.jsf page.



FacesContext.getCurrentInstance().getExternalContext()
    .redirect("customer-view.jsf?customerId=" + getCustomerId());



In conclusion
JSF makes it very hard to avoid state on the server, but I hope this entry has shown that it is possible to compose simple CRUD pages with that objective in mind. It arguably bends JSF beyond its intentions, but if you have had your share of problems with JSF as I have, perhaps this classic way of thinking in request parameters will let you use some of JSF's nice-looking components in a simple way, without walking down the dreaded stateful alley. As a side effect, it becomes trivial to document the state space and assert the preconditions of your pages.

The approach probably does not scale beyond relatively simple pages, although it does at least adhere to the separation-of-concerns principle. So if you command your own toolchain, I would urge you to have a look at Wicket (for classic broad-spectrum web apps) or GWT (for data-intensive intranet apps) instead.