30 November 2011

Stanford online courses

Stanford University offered several of their most popular computer science courses to the public this fall, online for free. The courses were so popular that Stanford’s doing it again in January.

Design and Analysis of Algorithms
http://www.algo-class.org/

Computer Security
http://www.security-class.org/

Computer Science 101
http://www.cs101-class.org/

Machine Learning (one of the offerings this past fall)
http://jan2012.ml-class.org/

Software as a Service
http://www.saas-class.org/

Human-Computer Interaction
http://www.hci-class.org/

Natural Language Processing
http://www.nlp-class.org/

Game Theory
http://www.game-theory-class.org/

Probabilistic Graphical Models
http://www.pgm-class.org/

Cryptography
http://www.crypto-class.org/

And here is an interesting link to MIT's free online course materials, along with an article about them

13 November 2011

Pomodoro Technique - Can you focus for 25 minutes?

by Staffan Noteberg
(Java2Days 2011)

The Pomodoro Technique is an interesting approach to improving your focus while working. It is based on several concepts:
  • the conditioned reflex - having an established rhythm is important, as it creates a safe and predictable environment and thus improves focus
  • the attention deficit trait - humans aren't actually capable of multitasking; the workbench-and-archive example illustrates this: the mind can only work on one thing at a time, and whenever you switch tasks you have to go to the archive to move the new task onto the workbench and store the results of the old task back in the archive. The bottom line is that doing more than one thing "at a time" really means doing very expensive switching.

What the technique offers is a self-improving process based on the Deming cycle:
  • Act phase - where you work on improving the process by analysing the data gathered during the check phase
  • Plan phase - where you plan the next iteration of work (according to the list of tasks, their priorities and dependencies)
  • Do phase - the phase where you actually do your work :) according to the plan and technique established by the previous two phases
  • and finally the Check phase - where various data about the performance of the current work process is recorded for use in the act phase

The resulting technique is summed up very well in the following graphic:


It is quite self-explanatory, but in case you want any more help you can check out Staffan's website.

Other lectures from the Java2Days conference

Modern Annotation-Based Code Generation with JAnnocessor
by Nikolche Mihajlovski

JAnnocessor is basically a very powerful code generation framework that is entirely configurable via annotations. Some of the highlights mentioned were the very clean generated code, the fact that it is built on top of the Java APT, and its very sophisticated template engine.

Taking your Application to the Cloud
by Josh Long from SpringSource

The lecture was basically a presentation of VMware CloudFoundry. Some of the interesting points were the cloud developer's bill of rights and the example of how CloudFoundry is used. Some of the pros of using a cloud development platform:
  • instant access as long as there is Internet access
  • distributed applications need distributed data
  • much easier configuration - no need for environment-specific variables, etc.
  • data store management in a cloud way, abstraction from actual database implementation

What's new in the Spring Integration Framework 2.1
by Oleg Zhurakousky from VMware

The book Enterprise Integration Patterns by Gregor Hohpe was identified as the primary inspiration behind the Spring Integration framework. The framework builds on messaging and uses the above-mentioned book as a specification.

The architecture is built around three basic notions: the endpoint (also referred to as a filter), the channel (also referred to as a pipe; both point-to-point and publish-subscribe), and the payload (with a key-value map for the header).

Endpoints could be one of the following types:
  • Transformer - an endpoint that converts the payload
  • Filter - discards messages based on a certain conditional statement
  • Router - determines a new channel for different payloads
  • Splitter - generates more than one message from one input message
  • Aggregator - assembles one message from several others
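To make the pipes-and-filters idea concrete, here is a minimal plain-Java sketch of a message pipeline with transformer, filter and splitter endpoints. This is a conceptual illustration only, not the actual Spring Integration API; all names here (MiniPipeline and its methods) are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

/** A channel is modeled as a list of endpoints applied to each message in order. */
public class MiniPipeline {
    // Each endpoint maps one incoming message to zero or more outgoing messages.
    private final List<Function<String, List<String>>> endpoints = new ArrayList<>();

    // Transformer: converts the payload.
    public MiniPipeline transform(Function<String, String> f) {
        endpoints.add(msg -> List.of(f.apply(msg)));
        return this;
    }

    // Filter: discards messages that fail the predicate.
    public MiniPipeline filter(Predicate<String> p) {
        endpoints.add(msg -> p.test(msg) ? List.of(msg) : List.of());
        return this;
    }

    // Splitter: generates several messages from one input message.
    public MiniPipeline split(Function<String, List<String>> f) {
        endpoints.add(f);
        return this;
    }

    public List<String> send(String message) {
        List<String> current = List.of(message);
        for (Function<String, List<String>> endpoint : endpoints) {
            List<String> next = new ArrayList<>();
            for (String msg : current) next.addAll(endpoint.apply(msg));
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        List<String> out = new MiniPipeline()
                .split(msg -> List.of(msg.split(",")))   // one message in, many out
                .filter(msg -> !msg.isBlank())           // drop empty fragments
                .transform(String::trim)
                .transform(String::toUpperCase)
                .send("foo, bar, ,baz");
        System.out.println(out); // [FOO, BAR, BAZ]
    }
}
```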

One thing to note is that the framework fully embraces Java's Future-based asynchronous methods.

For more detailed examples I highly recommend the video tutorials on the website of the Spring Integration framework.

Domain-specific languages
by Igor Stoyanov from VMware

Again a good source of reference is the book Domain Specific Languages by Martin Fowler.

The discussion started with the well-known hammer-and-nail problem: "if all you have is a hammer, everything looks like a nail". In terms of programming languages, using a specific general-purpose language shifts the focus away from the actual problem that needs to be solved. On the other hand, using a very generic language means reinventing the wheel most of the time.

The curve of abstraction is another very interesting overview of how the modern languages are changing.

What is important for DSLs is that context is king. A very good example was the shorthand used for Starbucks orders: when you ask for a specific product with specific contents, the order is given in a very short form that would have no meaning outside that context, but within it readability improves greatly. From this perspective, any DSL is heavily reliant on the context it is used in.

A very good definition was given for why successful DSLs are successful - basically what they do is combine example execution with program execution.

There was also a short note about language-oriented programming - something I am very familiar with. In short, this is the practice of using a general-purpose language to define a narrower DSL. From my perspective this is an excellent way to give non-technical QA specialists a means to do test automation without having to deepen their development skills.
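As a small illustration of that kind of internal DSL, here is a hypothetical Java fluent builder for describing acceptance-test scenarios. All names (LoginScenario, given/when/then) are invented for this sketch; a real language-oriented setup would of course attach behaviour to each step rather than just recording it.

```java
import java.util.ArrayList;
import java.util.List;

/** A tiny given/when/then internal DSL: each call records one readable test step. */
public class LoginScenario {
    private final List<String> steps = new ArrayList<>();

    public LoginScenario given(String precondition) { steps.add("GIVEN " + precondition); return this; }
    public LoginScenario when(String action)        { steps.add("WHEN "  + action);       return this; }
    public LoginScenario then(String expectation)   { steps.add("THEN "  + expectation);  return this; }

    public List<String> steps() { return steps; }

    public static void main(String[] args) {
        // The chained calls read almost like plain English - the point of an internal DSL.
        List<String> script = new LoginScenario()
                .given("a registered user \"alice\"")
                .when("she logs in with the correct password")
                .then("she sees her dashboard")
                .steps();
        script.forEach(System.out::println);
    }
}
```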

You can follow Igor by subscribing to his blog.

07 November 2011

Java FX 2.0

by Michael Heinrichs from Oracle
(Java2Days 2011)

"Simple and powerful"

This talk gave a very good insight into the latest release of the JavaFX framework. The framework itself has an impressive release timeline. The 2.0 version is a new start - it was decided that it would be a "pure Java" framework in order to reduce the learning curve.

Each JavaFX application has a strict hierarchical structure (the scene graph). The leaves of the graph are shapes, images, text, web views (an interesting feature that allows embedding web content such as, for instance, a Google Maps view), media, controls and charts (a great emphasis here, as the charts are very sophisticated). The parent nodes, on the other hand, are components that allow grouping (invisible to the user, treating several components as one), regions (the same, but including a border with some color), layouts (vertical, horizontal, etc.) and animations (typically applied to a parent container).

The animation API is quite sophisticated - animations can be played in loops, grouped in a sequence, played in parallel, or any combination of these techniques. There is also a feature called a timeline that provides all the utilities required for very complex animations. The framework calculates the complex movement described by a simpler definition (see example here).

Also worth mentioning is the very easy to use media API and nice (but limited) UI controls. The good news here is that the Java FX framework is going to become open source with some custom UI components already being created by the community.

An interesting note here - Java FX, as well as Flex, also supports CSS (which is something I am still not really sure I understand).

Lastly, and probably most importantly, there is the very advanced binding model that JavaFX supports. It builds on the typical Java POJO getter/setter concept, but the framework takes it even further by providing wrapper objects for each common data type. These wrappers expose listeners that allow both unidirectional and bidirectional bindings.
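To show the idea behind such wrapper objects, here is a rough plain-Java sketch of an observable property with unidirectional and bidirectional binding. This is not the actual JavaFX javafx.beans API - the Property class below is invented for illustration; the real framework is considerably richer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Conceptual sketch of an observable value wrapper with binding support. */
public class Property<T> {
    private T value;
    private final List<Consumer<T>> listeners = new ArrayList<>();

    public Property(T initial) { this.value = initial; }

    public T get() { return value; }

    public void set(T newValue) {
        // The equality guard both avoids redundant notifications and
        // stops the infinite ping-pong in a bidirectional binding.
        if ((value == null && newValue == null) || (value != null && value.equals(newValue))) return;
        value = newValue;
        listeners.forEach(l -> l.accept(newValue));
    }

    // Unidirectional binding: this property follows the source.
    public void bind(Property<T> source) {
        source.listeners.add(this::set);
        set(source.get());
    }

    // Bidirectional binding: changes propagate both ways.
    public void bindBidirectional(Property<T> other) {
        this.bind(other);
        other.bind(this);
    }
}
```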

You can follow the author by subscribing to his blog.

06 November 2011

Enterprise Java in 2012 - a Spring Perspective

by Jürgen Höller from SpringSource
(Java2Days 2011)

The presentation centred on the recent trends in cloud computing and Java, and how they affect the future of the Spring framework. So what's new in 2011? In 2011 several new server releases came out: GlassFish 3, JBoss 7, WebSphere 8 and Tomcat 7 (Servlet 3 based). A number of cloud platforms were released as well, including Google App Engine (Jetty++ based), Amazon Elastic Beanstalk (Tomcat++ based) and VMware CloudFoundry (Tomcat++ based).

From a Spring perspective there were challenges in several key areas:
  • Datastores are now more diverse, there is hardly any standardization and the concept of a distributed cache is very widely applied. Relational databases are no longer the main choice here and would probably become just one of the possible solutions.
  • Web clients have also changed quite a lot. Client-side technologies have overwhelmed the old server-side approach and now very few applications store any UI state. Even more - JSF is no longer a great fit: more and more applications use a very lightweight JAX-RS / JAX-WS back end in combination with some robust client-side technology.
  • Concurrent programming has radically changed. If the challenge before was handling requests from multiple sources (and thus having to do synchronization, state handling, etc.), the focus is now rather the multicore world and being able to handle great loads on multiple processors. The new Java 7, released in the summer of 2011, provides the java.util.concurrent.ForkJoinPool class that can be used in these cases.
  • Scala & Akka are a new generation of programming languages that combine both OOP and functional development and are very suitable for the new concurrency model
  • Java EE 6 is no longer relevant, as it takes care of yesterday's problems due to the slow expert-group process. Even worse - adoption of the technology also takes time, so by the time it arrives it is outdated, while the industry itself is undergoing rapid changes: cloud vs dedicated servers, alternative datastores vs relational databases, etc.
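A minimal example of the fork/join model mentioned above - summing an array by divide-and-conquer with the Java 7 java.util.concurrent API (the SumTask class and its threshold are just an illustration):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/** Parallel array sum using the Java 7 fork/join framework. */
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this size, sum sequentially
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute the right half, then join
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 5000050000
    }
}
```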
Java EE 7 tries to tackle these issues by bringing in many updates and focusing mainly on the cloud. The timeline for the umbrella specification is Q4 of 2012, and the full specifications would probably not be ready before 2013.

It would allow activation of beans in specific environments (e.g. profiles) by means of profile annotations. This would ideally help with cache abstraction, having a specific cache set-up for each environment.

The Servlet 3.0 specification would finally allow for the removal of the XML configuration files (such as web.xml and even persistence.xml) and would also support asynchronous request processing.

As a result, Spring 3.1 & Spring 3.2 would have Java 7 as their driver. They would make best use of the Java 7 JDK and also provide support for JDBC 4.1. There would be fork/join support via a typical Spring-style ForkJoinPoolFactoryBean. Even better - the framework would be built with Java 8 in mind: closures and single abstract method (SAM) types would be considered.

03 November 2011

Engineering Systems in the Cloud

by Falk Kukat from Oracle
(Java2Days 2011)

In this presentation, which was quite focused on the Oracle product line* (both in terms of hardware and software), Mr. Kukat gave a high-level overview of the value and importance a cloud environment can have for a corporation. There was also a good graph of the usual steps that need to be taken in order to achieve the goal of having such an infrastructure.

The starting point of the lecture was the all too well-known question about the actual meaning of a cloud. The answer was given via the National Institute of Standards and Technology (NIST) definition of cloud computing, which identifies several key criteria such as:
  • on demand self service
  • resource pooling
  • rapid elasticity
  • etc.

Another topic mentioned was cloud standards, covering interoperability, provisioning, etc.

The emphasis of the entire presentation, however (besides the whole plethora of Oracle products presented), was the roadmap to platform-as-a-service (PaaS). The roadmap suggested that each company needs to go through four distinct phases:
  • standardization - achieving better operating cost
  • consolidation - a standardized toolset; better utilization
  • improvement & performance - better resource management
  • an actual cloud environment - the actual goal
The effort required to reach each of these phases grows exponentially - the major step for a company being the improvement & performance phase.

There was also some talk about the infrastructure underneath. The message was clear - do not put effort into something you can buy at the same or a lower price. An analogy was given with buying a car: one could easily buy the best parts that make up a car, but putting them together would not necessarily yield a good car for a good price.

* I've intentionally left out all the topics related to Oracle products