Duck typing in Java

Duck typing in Java

According to wikipedia, duck typing is when "an object's suitability is determined by the presence of certain methods and properties (with appropriate meaning), rather than the actual type of the object". Duck-typing thus is almost the definition of a dynamic language like Ruby, Python, Groovy etc. Unlike a hybrid like i.e. C# 4 (thanks to it's dynamic modifier), Java's type-system does not allow duck-typing - it's an object oriented paradigm where polymorphism is meant as a static modelling mechanism of a type hierarchy. However, the dynamic proxy feature introduced with Java 1.3, does allow us to emulate duck typing. First a disclaimer though, I am far from the first to blog about the subject, even Brian Goetz (now Java's language architect) blogged about it back in 2005.

Dynamic Proxy

It was back in 2000 that Sun introduced the Dynamic Proxy functionality to JDK 1.3. As the name implies, it caters to the well known Proxy pattern and it does so in a dynamic fashion. In short, a dynamic proxy makes it possible to create an instance of some interface dynamically at run-time. For many years, it has been the underlying work-horse of more advanced and exotic functionalities in Java frameworks for doing AOP, ORM, remoting, access control etc. Using a dynamic proxy can cause some controversy due to the dynamic nature, which is far from Java's traditional static type system. So right off the bat, let's simply write a small utility class (embedded DSL) that makes it easy to use the Java dynamic proxy feature:

public final class DuckType {

    private final Object source;
    
    private DuckType(final Object source) {
        this.source = source;
    }

    public static DuckType on(Object source){
        return new DuckType(source);
    }

    private class DuckTypeProxy implements java.lang.reflect.InvocationHandler {
        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            final Method delegate = findMethodBySignature(method);
            return (delegate == null) ? null : delegate.invoke(DuckType.this.source, args);
        }
    }

    public <T> T as(Class<T> clazz) {
        
        AssertHaveCompatibleMethodSignatures(clazz);

        return asLenient(clazz);
    }

    public <T> T asLenient(Class<T> clazz) {
        return generateProxy(clazz);
    }

    private void AssertHaveCompatibleMethodSignatures(Class clazz) {
        for (java.lang.reflect.Method method : clazz.getMethods()) {
            if (findMethodBySignature(method) == null) {
                throw new ClassCastException("Not possible to ducktype " 
                        + source.getClass().getSimpleName() 
                        + " as " + clazz.getSimpleName() 
                        + " due to missing method signature " + method.toString() 
                        + ". If a No-Operation behavior is preferred, consider "
                + "calling asLenient(..) instead!");
            }
        }
    }

    @SuppressWarnings("unchecked")
    private <T> T generateProxy(Class<T> iface) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class[]{iface}, new DuckTypeProxy());
    }

    private Method findMethodBySignature(Method methodToMatch) {
        try {
            return source.getClass().getMethod(methodToMatch.getName(), methodToMatch.getParameterTypes());
        } catch (NoSuchMethodException e) {
            return null;
        }
    }
}

The above utility allows us to treat any object as an instance of some interface, even if it does not implement given interface. What will happen instead, is that the methods will dispatch dynamically and thus will lookup methods at run-time and forego any checking at compile-time. This has some interesting implications which can be useful in a mocking scenario, for decorating or for proxying.

Proxying

The majority of use cases, which also lays name to the API itself, is to act as a proxy - completely detaching types. A proxy is just an object that stands in place of another. It's not uncommon in a multi-layered architecture, to have many versions of some type, say a Customer class. In order to isolate each layer and avoid leaky abstractions, best practice is to wrap and copy data as they enter and exit each layer (the same goes for exceptions).


This practice not only becomes tedious to type, it also incurs a considerable repetition overhead on the code base which code quality tools (SonarQube etc.) and code quality organizations (SIG, BlackDuck etc.) will flag as suspicious. So while the layered design is nicely decoupled to only caring about one layer (and possible the layers immediately next to), it doesn't come for free.

The solution often seen in Java, is to add a vertical interface layer (you'll typically recognize these as *-api projects) for a cross-cutting abstraction layer, allowing some common super-type to be known throughout the system. Then, by referring only to the interface when throwing data around, nothing layer-specific escapes and no manual copying of data from one structure to another, needs to happen.


The problem with this approach, is that you better be damn sure you got it right to begin with (and we all known how great Ivory Tower designs work, right?), because changing interfaces later on is, by definition, a breaking change that requires updating all downstream implementations. So while the tight coupling does remove duplication, it does so at a cost.

This is where a dynamic proxy can come into play. Some languages are of obviously dynamic in nature, while a few modern languages introduced a hybrid approach, allowing certain corners of an application to be coded with less type checking from the compiler. Java does not offer such a cop-out natively, but we can however use dynamic proxies to achieve much of the same thing.

public class ProxyTest {

    interface Duck {
        public String speak();
    };

    class Cat{
        public String speak(){
            return "Miau";
        }
    };

    public static void main(String[] args) {
        new ProxyTest();
    }
    
    public ProxyTest(){
        Duck catTreatedAsDuck = DuckType.on(new Cat()).as(Duck.class);
        System.out.println(catTreatedAsDuck.speak());
    }    
}

Which will output the following to the console:

run:
Miau
BUILD SUCCESSFUL (total time: 0 seconds)

So you'll notice that calling speak() on the catTreatedAsDuck object, which is an instance of the Duck object, will invoke the speak() on the underlying Cat object - even if Cat really doesn't implement the Duck interface!

This example may seem a bit silly, but it demonstrates how to essentially "inject" an interface, and that can be very useful when you have to work with legacy code, auto-generated code or multi-layer architectures where you do NOT have a commonly shared contract/interface. I have used this approach before on production systems, where lots of compiler-compiler auto-generated types had to be treated by a lot of similar methods. Proxying these objects behind an interface, allowed me to remove a lot of redundant code in favor of DRY. The cost of doing this, is a bit of dispatch speed, but realistically speaking most won't even be able to measure any difference unless they do a huge amount of calls. Another cost, is that of loosing type-safety - there is no help from the compiler, so just as is the case with dynamic languages, having integration-tests becomes absolutely paramount!

Remote proxy

Another type of proxy which should also be mentioned, is the remote kind, where you use it as a communication mechanism out-of-process and possibly over a network. In 2007 I wrote a small framework called HttpRMI which is a super simple way of calling a remote method over HTTP, using Java's vanilla serialization mechanism underneath (note: today I would not use this, there are better alternatives like i.e. Hessian, Spring Remoting or Protocol Buffers), but it's implemented exactly the same way as the proxy above, except it's made to implement two separate client-server parts.

The server part is made out by DynamicProxyFactory and the client part by HttpRmiServlet. With these little helpers, we can use the Dynamic Proxy as a remoting mechanism as demonstrated below:

public interface SampleContract{
    public String getHello ();
}


public class SampleServlet extends HttpRmiServlet implements SampleContract{
    public String getHello(){
        return "Hello World";
    }
}


public class SampleClient{
    public SampleClient(){
        SampleContract contract = DynamicProxyFactory.create(SampleContract.class, "http://localhost:8080/SampleServer/SampleServlet");
        System.out.println( contract.getHello() );
    }
}


run:
Hello World
BUILD SUCCESSFUL (total time: 0 seconds)


So the SampleContract on the client is obviously not the same SampleContract as is running on the server, but for all practical purposes, there's no way of knowing this on the client-side. This is generally a sanctioned way of using the dynamic proxy.

Dynamic dispatch can be ok

The dynamic proxy can of course be combined with other design patterns and also for stubs, although with modern dependency injection and mocking frameworks (that handles more than just interfaces), it isn't used much for this. Using dynamic dispatch in a static Java environment, is by definition a bit controversial. I know of at least one code quality audit organization that opposes using a dynamic proxy because it's "complicated" (I'm looking at you SIG), even after showing how much redundant code can be removed from the code base.

My view on this is less black-n-white and I tend to appreciate "static when you can, dynamic when you must", to quote Anders Hejlsberg, chief architect of C#. Having just been on a large enterprise project using Grails and Groovy, where bits and pieces can blow up anytime when you hit "Run", I definitely favor static modelling and compile-time checking from compiler and IDE. However, some aspects of an application can indeed benefit from dynamic dispatch and I'll argue that layer boundaries within an application are a good candidate for this. Whether you consume a web-service, parse an XML file, talk to a database etc., you need to have sufficient testing in place between the layers anyway. There is also a good chance that, if abstractions have been broken down accordingly, the interface is made out of larger but fewer calls rather than many smaller ones. In other words, there shouldn't be any observable run-time cost associated with doing dynamic dispatch between layer boundaries so it remains more of a theoretical cost than a real one.

What about you, agree or disagree with such a hybrid approach? Let me know in the comments why! :)

Comments

Popular posts from this blog

Oracle SQLDeveloper 4 on Debian/Ubuntu/Mint

Beware of SQLite and the Turkish Locale

Rejsekort Scanner