Development Hygiene

I recently attended three keynotes: Alan Kay and Andreas Kuehlmann at TOOLS, and Jeff Kramer at ICSE. Listening to these talks, I got the impression that the software industry is stuck: we don’t know how to move beyond manual development and testing of fine-grained software artefacts. However, as software engineering matures, a body of best practices emerges that increases the quality of development. For instance, Jeff Kramer showed that certain ideas from the architecture community had indirectly made their way into industry in another form, e.g. dependency injection.

Andreas Kuehlmann used the word hygienic to refer to teams with sound engineering practices. The Wikipedia definition of hygiene is a “set of practices employed as preventative measures to reduce the incidence and spreading of disease”. High hygiene standards favor clean code. Assertions, contracts, and refactoring are examples of hygienic practices. Hygienic practices lower the risk that your software becomes a dirty pile of code. I like this analogy with health.

Progress in health promotion didn’t happen in one day. Medical and everyday practices such as sterilization, quarantine, treatment of waste, and dental care took time to become established. Next to these practices, the healthcare industry enjoyed several breakthroughs, such as vaccines and organ transplants, that enabled this remarkable progress. Progress in development is similar to progress in healthcare.

  • Testing hygiene. Unit testing is similar to the daily routine of brushing one’s teeth after each meal. It’s a de facto standard practice. Testing hygiene prevents bugs.
  • Code quality hygiene. IDEs with code-style enforcement prevent unreadable code from spreading.
  • Modularity hygiene. A vast set of principles helps keep software modular and prevents coupling and big balls of mud.
  • I/O hygiene. Sanitize your inputs and protect yourself from a malicious external environment. Don’t disclose sensitive information to the outside.
  • Monitoring hygiene. The first step to staying healthy is to identify problems and remedy them before it’s too late. Yearly check-ups are becoming more and more common to detect symptoms of disease. The same is true for applications, with monitoring, stress tests, and periodic reviews.

Hygiene refers to daily common practices. Healthcare directly improves with improvements in hygiene, but also needs experts at times. When somebody is sick, they are sent to the hospital for treatment. For software, when an unreliable component is detected, it is put in quarantine, then analyzed and fixed with proper tools and techniques. Expert troubleshooting is the equivalent of surgery.

Finally, what makes software remarkable is that, similarly to organisms, software is grown, not built. Also, organisms and software are both highly dynamic systems. This makes the analogy even stronger.

Hygienic programmers grow clean code and run healthy software.

Using Multiple Google Calendars with iOS 5

With Google calendars, you can create additional calendars linked to your account. This is convenient, say, to separate your own events from the events of others who don’t use any online calendar but whom you want to track.

With iOS 4, adding a Google account would display only the primary calendar, not the auxiliary ones. The solution then was to add them individually as WebCal calendars. The WebCal URL is somewhat cryptic but could be obtained from the ID of the calendar found in the settings.

After upgrading to iOS 5, all events are duplicated n times, where n is the number of auxiliary calendars. Sounds like an awful bug, doesn’t it? Actually not: things only got better. The auxiliary calendars are now correctly supported.

Go to m.google.com/sync and select the auxiliary calendars you want to sync. The Google calendars you selected will all appear under your Google account on iOS 5 (maybe you need to recreate the account, though). You can then safely remove the spurious individual WebCal calendars.

Sandboxed Evaluation

JSON is a data exchange format, and at the same time it is valid JavaScript. You don’t need a parser to read it; you can use the almighty eval to interpret the data. This is, however, a security hole. Malicious JSON data could be provided and would be evaluated without further notice. Parsing is therefore still necessary to validate the data. There’s nothing really new there: reflection and security are notoriously hard to reconcile.

It doesn’t necessarily need to be the case, though. Reflection could be limited so as to allow what is necessary and deny malicious actions. In the case of JSON evaluation, one should prevent access to global data and the creation of new functions. That is possible, e.g. with a custom first-class interpreter. The data might still not be valid JSON from the point of view of the data format, and might contain expressions. More advanced “security” policies could specify execution protocols that would be forbidden, so as to prevent the execution of certain expressions, or sequences of expressions.

But then comes the question of error handling. Isn’t it “too late” to catch errors at run-time? An error, even if caught in time, might leave the system in an invalid transient state from which it is hard to recover. Validating the data statically prior to any execution therefore seems an easier approach. This is for instance the approach taken by the Java platform: mobile code is verified when loaded.

Checking for potential errors statically is however more restrictive than checking for errors at run-time. Similarly to type systems, valid programs or data might be rejected because of the conservative nature of static checking.

Going back to a dynamic perspective on security, we need a way to (1) catch errors at run-time and (2) recover from them at run-time. To provide the same level of security as its static counterpart, such an approach should make sure that nothing can happen at run-time that cannot be undone. Recovery blocks, transactional memory, and first-class worlds are all techniques that make undo easy.
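
To illustrate the undo idea, here is a minimal Java sketch of recovery through snapshot-and-rollback; the Sandbox and SandboxViolation names are mine, not an existing API:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Thrown by the (hypothetical) checks guarding the untrusted code.
class SandboxViolation extends RuntimeException {}

class Sandbox {
    // Run untrusted code against a copy of the state; commit only on success.
    static Map<String, Object> run(Map<String, Object> state,
                                   Consumer<Map<String, Object>> untrusted) {
        Map<String, Object> world = new HashMap<>(state); // snapshot (shallow copy only!)
        try {
            untrusted.accept(world);
            return world;   // nothing went wrong: commit the new world
        } catch (SandboxViolation e) {
            return state;   // violation: discard all changes and recover
        }
    }
}

This only undoes changes to the copied map; as argued below, a real mechanism must match everything the untrusted code is allowed to do.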

What can be recovered must match what is allowed. Undoing changes to memory will not be enough if one can read from I/O streams; either I/O operations should be denied, or the recovery mechanism should be strong enough to safely undo I/O reads and writes if needed. Spawning threads might be allowed, if we are able to detect and escape from infinite loops and reliably cancel the execution of untrusted code.

Take for instance the case of Java Enterprise Edition (JEE). Applications deployed on a JEE application server must obey specific restrictions. Most constraints arise from the transactional nature of EJBs, and from the fact that clustering should be transparent. JEE applications run in a managed environment and should not spawn threads, access static data, perform blocking I/O, etc.; they should be side-effect free. The sandboxing facilities of the JVM do not suffice to enforce these constraints, and many of them are mere conventions. The run-time itself does not prevent messing with the infrastructure.

We need a configurable runtime with adequate programming language abstractions to define and enforce security policies at various levels of granularity. Such a runtime should be flexible enough to enforce the safe evaluation of JSON data as well as the safe execution of JEE applications.

On Parallel Class Families

I talked yesterday with Raffael Krebs, a master student at SCG, who was disappointed that his design in Java lacked extensibility. The main problem was that he wanted parallel class hierarchies, which you can’t do easily in Java.

This problem isn’t new. Let’s illustrate it and explore the various designs in pure Java, then with existing language extensions.

Below is a sample situation. The core of the model is composed of Document, Item, Text and Image. The Reader reads data from a file and creates the model. The client uses the model. To be generic, it must be possible to specialize the model with as little effort as possible, that is, to provide reuse in a natural way. Conceptually, one would like to specialize (that is, subclass) the model, for instance for HTML documents.
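
To make the discussion concrete, here is a minimal Java sketch of such a core model (everything beyond the class names Document, Item, Text, Image and Reader is my guess at the design):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

// The core family: a Document aggregates Items; Text and Image are the two kinds of Item.
abstract class Item {}
class Text extends Item {}
class Image extends Item {}

class Document {
    private final List<Item> items = new ArrayList<>();
    void add(Item item) { items.add(item); }
}

// The Reader parses a file and builds the model; note that it is
// hard-wired to the core classes (problem 2 in the list below).
class Reader {
    Document read(File file) {
        Document doc = new Document();
        doc.add(new Text());
        doc.add(new Image());
        return doc;
    }
}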

This conceptual view cannot be achieved in Java. There are several problems: (1) multiple inheritance is not supported, (2) the Reader class still references the core model and will not create instances of the more specialized one, and (3) in most statically typed languages, parameters in a subclass must be covariant with those in its superclass.

The first two problems can be circumvented with interfaces and factories. The third one might require a downcast in add(), if specialized classes have added methods. If only existing methods have been overridden, traditional polymorphism is enough.

Generics provide a partial solution to the problem. Document can accept a parametric type <A extends Item>, so that HtmlDocument can be defined as an extension of Document<HtmlItem> (see this answer for an example). The downcast in method add() disappears. Item is however abstract, and the system assumes there are two implementations of it, Text and Image. With generics, the class Document cannot create instances of these with regular instantiations new Text() or new Image(). First, they would not necessarily comply with the parametric type <A extends Item>; second, these instantiations would not be rewired correctly to their corresponding classes in the other family. If classes were first-class, one could at least define abstract factory methods createText() and createImage() in Item. This is however not possible in Java: static methods cannot be overridden, because classes are not first-class.
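
A sketch of the generics-based variant (reusing the Item classes from the sketch above; again, the names are illustrative):

// The document is parameterized by the kind of item it contains.
class Document<A extends Item> {
    private final java.util.List<A> items = new java.util.ArrayList<>();
    void add(A item) { items.add(item); }  // the downcast disappears

    // But Document cannot create items itself: "new Text()" would neither
    // satisfy A nor be rewired to HtmlText in the specialized family.
}

class HtmlItem extends Item {}
class HtmlText extends HtmlItem {}
class HtmlImage extends HtmlItem {}

// The specialized family fixes the type parameter.
class HtmlDocument extends Document<HtmlItem> {}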

In a dynamically typed language, the need for interfaces vanishes, as does the third issue. However, in both cases problems (1) and (2) remain.

The lack of multiple inheritance in this case leads to code duplication between classes for which no inheritance relationship exists. To avoid such duplication, delegation could be used: an instance of HtmlText could delegate to an instance of BaseText. As pure delegation is not supported, this requires writing boilerplate forwarding methods, which is still clumsy. Another way to avoid code duplication would be to factor the duplicated code into traits or mixins that can be reused among unrelated classes.
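
For instance, a sketch of the delegation workaround (BaseText and its contents() method are mine, standing in for the reusable behavior):

class HtmlItem {}  // stands in for the HtmlItem of the previous sketch

// The reusable implementation that HtmlText would like to inherit from.
class BaseText {
    String contents() { return "..."; }
}

// HtmlText already extends HtmlItem, so it cannot also inherit from BaseText;
// it holds a delegate instead and forwards each method by hand.
class HtmlText extends HtmlItem {
    private final BaseText delegate = new BaseText();
    String contents() { return delegate.contents(); }  // boilerplate forwarding method
}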

No matter what, this design is ugly, and it is really found in practice.

Revisiting the three problems

The problems discussed before can be rephrased as three separate questions:

– How can the reader produce the right kind of document (either basic or HTML) without invasive changes?

– How can variations of a class hierarchy be created with minimal effort and no boilerplate code?

– How can the type system be made aware of logical relationships between types, e.g. that HtmlDocument always goes with HtmlItem?

The fourth problem

A fourth problem, not depicted above but that happens in practice, is that two class hierarchies relate to each other, but in slightly incompatible ways. This typically happens due to software evolution, when methods are renamed, signatures are changed, etc. An example would be changing the “data” in Item from binary byte[] to a base64 string. While certain kinds of changes can be accommodated by providing conversion functions (they act as wrappers), this bloats the design and prevents clean evolution. Sometimes it is impossible to create an inheritance relationship between the two variants of the classes, which means that whole graphs must be adapted back and forth from one representation to the other. We state the fourth problem as follows:

– How can graphs of objects be converted from one logical representation to another easily, without the conversion logic bloating the code?

The fifth problem

A fifth problem, not discussed so far, is actually a requirement:

– How can new families be introduced a posteriori, without entailing modification or invalidation of existing code?

This problem is usually referred to as modularity.

Language constructs

These problems are clearly related to each other, as the example shows. A definitive solution to all of them is, to my knowledge, still not available, but there has been progress in programming languages providing means to tackle parts of them.

  • Design patterns (strategy, factory)
  • Generics
  • Static & dynamic reuse mechanisms (delegation, traits, talents)
  • Family polymorphism
  • Virtual classes (newspeak/gbeta, dependency injection, first-class class)
  • Object Algebras
  • Local rebinding (classbox)
  • Open classes
  • Multi-dimensional dispatch (subject-oriented programming, context-oriented programming, worlds, namespace selector)
  • Non-standard type coercion (translation polymorphism in Object Team & lifted Java, expander)
  • Type versioning (type hot swapping, upgradeJ, revision classes)

It seems to me there is room for something actually missing that would solve the problems depicted above. Or maybe it exists and I am simply not aware of it?

Ownership for Dynamic Languages

Edit: the idea presented in this post was worked out into a consistent language feature and accepted for publication.

Ownership type systems have been designed for statically typed languages to confine and constrain aliasing, which in turn increases security and safety. Ownership type systems are conservative, and can only enforce properties that are known to hold globally and statically.

Rather than taking a static view on ownership, a dynamic view would be more natural, and more accurate, and could also be applied to dynamic languages. The counterpart of static typing in dynamic languages is testing. Ownership violations would raise run-time errors. The system would need to be tested to ensure it is free of ownership violations. This bears resemblance to contracts, and ownership could be seen as a special kind of run-time contract. Also, similarly to contracts, once the system has been exhaustively tested, run-time ownership checks could be disabled.

The ownership relationship between objects could be changed at run-time. While we can expect that most of the time the owner of an object remains the same and is known at object creation time, this need not necessarily be the case. Whether an object can have multiple owners or not is an open question. If we restrict the system to one owner per object, the syntax could be as simple as:

anObject := MyClass newOwn. "create a new instance that will be owned by self"
anObject := MyClass new ownedBy: anOwner. "assign an arbitrary owner"

Of course, only an owner that satisfies the ownership constraints can be assigned, not any object. Once an owner is assigned, the system would enforce that no aliasing happens that would violate the ownership constraint. In practice, this means that assignments and parameter passing must be checked at run-time. This is not trivial without entailing a high overhead in space and time.

Combined with resumable exceptions, ownership violations might not be fatal, and could be considered special events one can catch when objects escape their owners. For instance, when an object escapes its thread, it could be adapted to become thread-safe, or objects could be frozen (i.e. made read-only) when they escape their owner. A variant would be to specify a treatment block per ownership relationship, to be executed if the constraint is broken. By default, objects would have a flag that indicates whether the constraint holds (for sure) or not. Raising exceptions or other strategies would be built on top of that.

MyClass new ownedBy: anOwner ifViolated: aBlock. "treatment if constraint is violated"
aFlag := anObject isOwnershipEnforced.

There are a few interesting questions around this run-time perspective on ownership:

  • Can certain patterns be better expressed with ownership?
  • Does the ownership information help program understanding?
  • Do objects have one natural owner in practice?
  • Does run-time ownership checking help spot bugs or increase safety?
  • How can run-time ownership checking be made practical from the point of view of performance?

Ah! As usual, more questions than answers…

The Social Network

I wasn’t much involved or interested in social media (Twitter and the like) until I joined SCG a few months ago. I had a rather defensive attitude and wanted to have the smallest footprint on the web. For several reasons, I nevertheless started using Google Shared Link, Twitter, CiteULike and Stackoverflow to see how they worked.

I must admit that I kind of like them all, now that I have overcome my initial resistance. But what I like most are the surrounding questions about the evolution of society. Here is a bunch of points I’m asking myself about these days.

Ranking, reputation and suggestion systems

The heart of these systems is to identify the value that the community gives to a certain person or item (value is vague; maybe relevance or credibility would be better). This value can be mined using information about the network, the number of visits, etc., or by asking users to vote. The purpose of these systems is to be fair, objective and democratic. Such systems are however complex to create. You need to design a set of rules that fits the purpose, as well as a set of counter-mechanisms to eliminate the abnormal behaviors that still slip in (e.g. robot visits, abnormal patterns in user votes, etc.). Ultimately, all such systems have their own weaknesses. This wasn’t too much of a problem when we didn’t depend critically on such systems, but this is now the case.

The value of our second life

How much value should we give to the web presence of an individual? For instance, recruitment has already changed with the appearance first of job sites, and then of online CVs. This tendency will continue and expand to all areas of our lives. We can expect consolidated profiles to be used more and more prior to meeting people for real. You can’t just erase all that and start from scratch. This may seriously bias our opinion of people. Prejudices related to our web presence may be hard to overcome. Our presence on the web will be a direct measure of our skills, as is already the case for instance with a Stackoverflow Q&A record on a CV. Will this expand to other areas? Will we soon see sentences such as “10+ memes on Twitter is a plus” for people working in PR?
  • How much should we trust this information?
  • What is the “critical mass” that these systems must reach to really work?
  • Does this information represent the real soft and social skills of a person?
  • Can we really sum up people with numbers?
  • When will the first “single consolidated metric” appear that grades an individual according to their complete web presence?

Community vs. individual

The web was first driven by communities. People who contributed to the web adhered to the values of these communities. However, if the tendency to expose single individuals continues, there will be more and more tension between the community aspects and the individual, selfish aspects. This tension isn’t new and has probably been studied for decades in sociology and psychology, but its expansion to the web is new. And the effect is unknown. Everybody will be an active player on the Internet, and not just a passive user as during the past decade. We can then expect much more friction and instability in these social web sites. Or maybe not.

References

Nowhere to Hide: Assessing Your Work Reputation Online

Implementation of Semaphores

For the needs of the experimental Pinocchio research VM, we needed to add support for threading and concurrency. We implemented green threads, à la Squeak, so there is no “real” multi-core concurrency going on. The VM relies on AST interpretation instead of bytecode. With green threads, the interpretation of an AST node can always be considered atomic: no two AST nodes can be interpreted concurrently. This is unlike Java and its memory model, where individual bytecodes can be interpreted concurrently, possibly with nasty side-effects (e.g. manipulation of a long is not atomic). Thread preemption can happen anytime between the evaluation of AST nodes.

How can we add support for semaphores?

The Pharo design

The Pharo design can be informally summarized like this: when a semaphore is instantiated, its counter is set to one. Whenever a block needs to be evaluated in an exclusive way, the counter is checked. If the counter > 0, it is decreased and the block is evaluated. If the counter = 0, the thread is suspended and added to the list of threads currently waiting on this semaphore. When the critical block has been evaluated, the list of suspended threads is checked. If there are no suspended threads, the counter is incremented back to 1. Otherwise, one of the suspended threads is picked and resumed.

This implementation explains why Semaphore extends LinkedList. I’m not sure it’s the best design decision: a semaphore is not conceptually a list, and the list protocol should not be exposed by a semaphore. It uses inheritance for implementation reuse, but composition would have been just fine here if the semaphore internally held a linked list (and maybe used a pluggable ownership type system to check that the list does not leak out of the semaphore…).

Also, semaphores must be created using the forMutualExclusion factory method. This method instantiates and initializes the semaphore to allow exactly one execution at a time (hence the term mutual exclusion), but nothing would prevent you from initializing the semaphore so that up to N blocks can be executed concurrently.

The respective code for wait and signal (which respectively decrement and increment the counter) is:

wait
    excessSignals > 0
        ifTrue: [ excessSignals := excessSignals - 1 ]
        ifFalse: [ self addLastLink: Processor activeProcess suspend ]

signal
    self isEmpty
        ifTrue: [ excessSignals := excessSignals + 1 ]
        ifFalse: [ Processor resume: self removeFirstLink ]

They are however implemented as primitives. I suspect this is not for performance reasons, but for the sake of concurrency correctness. These operations themselves need to be atomic. Implemented in Smalltalk, the thread could be preempted in the middle of one of them, breaking the semaphore’s design.

The test-and-set design

These two methods and the internal counter suggest that an implementation relying on more generic concurrency primitives is possible. Typical concurrency primitives for this are test-and-set or compare-and-swap. We’ve added a primitive testAndSet to Boolean, and implemented the Semaphore with busy waiting (also sometimes called a spin lock):
 
critical: aBlock
    | v |
    "we spin on the lock until we can enter the semaphore"
    [ lock testAndSet ] whileTrue: [ PThread current yield ].
    "we evaluate the block and make sure we reset the flag when we leave it"
    [ v := aBlock value ] ensure: [ lock value: false ].
    ^ v

The design could be improved to avoid busy waiting. Instead of yielding, the thread would be suspended and added to a list. In the ensure block, the flag would be reset and one of the suspended threads resumed. The resumed thread would however still need to testAndSet the lock, since another thread may have entered the semaphore in the meantime, possibly delaying the resumed thread forever. So if fairness is required, this algorithm is not optimal.
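
For comparison, here is a rough Java sketch of that improved design (my own analogue using java.util.concurrent primitives, not the Pinocchio code; note how the resumed thread loops back to the compare-and-set):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

class SuspendingLock {
    private final AtomicBoolean lock = new AtomicBoolean(false);
    private final Queue<Thread> suspended = new ConcurrentLinkedQueue<>();

    void acquire() {
        while (!lock.compareAndSet(false, true)) {   // testAndSet analogue
            Thread me = Thread.currentThread();
            suspended.add(me);
            if (lock.get()) {
                LockSupport.park();                  // suspend instead of spinning
            }
            suspended.remove(me);
        }
    }

    void release() {
        lock.set(false);                             // reset the flag
        Thread next = suspended.peek();
        if (next != null) {
            LockSupport.unpark(next);                // resume one suspended thread
        }
    }
}

As noted above, the resumed thread must still win the compare-and-set, so a late arrival can overtake it: fairness is not guaranteed.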

The bakery design

You can also implement critical sections without the support of other concurrency primitives. The most famous such algorithm is probably Lamport’s bakery algorithm:

What is significant about the bakery algorithm is that it implements mutual exclusion without relying on any lower-level mutual exclusion.  Assuming that reads and writes of a memory location are atomic actions, as previous mutual exclusion algorithms had done, is tantamount to assuming mutually exclusive access to the location.  So a mutual exclusion algorithm that assumes atomic reads and writes is assuming lower-level mutual exclusion.  Such an algorithm cannot really be said to solve the mutual exclusion problem.  Before the bakery algorithm, people believed that the mutual exclusion problem was unsolvable–that you could implement mutual exclusion only by using lower-level mutual exclusion.

In our case with green threads, reads and writes are atomic because they are single AST nodes, but this isn’t necessarily the case in general.
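
As an illustration, here is a sketch of the bakery algorithm in Java (only a sketch: on a real JVM the arrays would need volatile or atomic accesses to guarantee visibility, which plain arrays do not provide):

// Lamport's bakery algorithm for n threads, identified by indices 0..n-1.
class Bakery {
    private final boolean[] choosing;  // thread i is picking a ticket
    private final int[] number;        // thread i's ticket, 0 = not interested

    Bakery(int n) {
        choosing = new boolean[n];
        number = new int[n];
    }

    void lock(int i) {
        choosing[i] = true;
        int max = 0;
        for (int ticket : number) max = Math.max(max, ticket);
        number[i] = max + 1;           // take a ticket above everyone else's
        choosing[i] = false;
        for (int j = 0; j < number.length; j++) {
            if (j == i) continue;
            while (choosing[j]) { /* wait until j has picked its ticket */ }
            // wait while j holds a smaller ticket (ties broken by index)
            while (number[j] != 0
                   && (number[j] < number[i]
                       || (number[j] == number[i] && j < i))) { /* spin */ }
        }
    }

    void unlock(int i) {
        number[i] = 0;
    }
}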

Here ends this little trip into basic concurrency. There is a rich literature on the topic (which is truly fantastic), and we might explore and implement more sophisticated abstractions later on.

References

The Java Memory Model
Thread Synchronization and Critical Section Problem
A New Solution of Dijkstra’s Concurrent Programming Problem

Thoughts About Scrum

For more than a decade now, lightweight methodologies and processes have flourished under the umbrella of the Agile movement.

Despite the success of Agile claimed by daily practitioners, I haven’t seen much research showing evidence that the Agile concepts really work as expected; by this, I mean that Agile could still lead to reproducible success, but for reasons other than the ones pushed by the various methodologies.

Here are a few open questions, especially about Scrum, for which I couldn’t find a decent answer. (For other related digressions of mine, see real-world software development methodologies and What is a feature?)

Wasting time and unnecessary pressure

Scrum is organized around the notion of a product backlog. One central notion in Scrum is that once an iteration has started, its priorities won’t change; developers can focus and organize their work according to what was decided to go into the iteration, without being constantly disturbed by changes.

When a necessary change is detected, we tend to perceive it as high-priority: “We absolutely need this and that as soon as possible”. The existence of the product backlog and the frozen iteration improves this situation. Any story can be added at any time to the backlog. The priorities of all stories will be reconsidered together when the next iteration starts. The process itself thus promotes some relativity in the decisions that are taken, which is welcome, as it lowers the unnecessary pressure that is frequently generated.

Even though I heard this from a Scrum practitioner, I never found any evidence of it. It would however be possible to mine product backlogs to verify whether the priorities of stories tend to follow the scheme mentioned above: most of them are first entered with high priority, which then lowers over time. Only a few stories are really high-priority, and the process helps their selection and identification (we could say that the signal-to-noise ratio is improved).

Delivering added-value

Another pillar of agile methodologies is to move away from technology engineering towards product engineering. The team should focus on what brings the most added value, that is, what makes the software valuable, instead of what constitutes the software technically.

We see this tendency in several concepts:

  • Scrum has user stories, each the expression of a valuable feature from the point of view of the end-user.
  • FDD has features, expressed in the form “<action> <result> <object>” (e.g. Calculate the total of a sale), which directly represents the value of the feature as perceived by the end-user.
  • TDD is less clear about this (and I’m not even sure it’s considered agile). Still, a test case can be perceived as the expression of a comprehensive feature.

While nice on paper, it’s not always easy to organize development only according to added value as perceived in the final product by the end-user.

For instance, the possible asymmetry between the complexity of a user story and the complexity in terms of development effort may force a user story to be broken down into technology-related tasks.

Internal development tasks also escape the added-value scheme. A refactoring has no added value from the customer’s point of view. How, then, is internal development effort managed in the context of agile methodologies?

I never found any clear positioning of agile practitioners regarding such problems. An analysis of existing backlogs or issue trackers would be valuable to provide an explanation of how people manage this.

The case of bugs

Shipping a software product with no bugs is an illusion. Bug reports, bug triage and then bug fixing are part of the software development process.

Bugs and defects do however not fit very well with most agile methodologies. Should the user story or feature be reopened when a bug is reported? Should the bug be wrapped in a new story or feature?

There aren’t many descriptions of how the main development effort and bug handling fit next to each other. A study of how this happens in practice, both from the point of view of the process and from the point of view of the tools (e.g. do most companies still use an issue tracker to let customers report defects?), would be interesting.

How agile are you?

The central point of agile is, well, to be agile. That is, to be able to adapt rapidly to changing requirements or situations. This stems from the fact that we cannot anticipate everything, and that we can actually anticipate very little when it comes to software development. And it’s a lot more rewarding and effective to improve our ability to face changes than to desperately attempt to improve our ability to predict the future.

That said, there is no metric for the “agility” of a team. It would however be interesting to define such a metric (even if not a really tangible one) to be able to track progress in terms of agility. Such a metric could be based on a mathematical model of how long user stories remain in the backlog.
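
Purely as an illustration (this formula is my own naive sketch, not a metric from any methodology), agility could be operationalized as the inverse of the mean time a story spends in the backlog:

$$\mathrm{agility} = \left(\frac{1}{N}\sum_{i=1}^{N}\left(t_i^{\mathrm{done}} - t_i^{\mathrm{added}}\right)\right)^{-1}$$

where $t_i^{\mathrm{added}}$ and $t_i^{\mathrm{done}}$ are the times at which story $i$ entered and left the backlog. A team that turns stories around quickly scores high; a refinement could weight each story by its priority.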

Conclusion

Despite the wide acceptance of agile methodologies and their apparent success, how they are really used in practice remains vague. It is especially hard to know how much actual processes diverge in practice from the somewhat theoretical vision of each methodology. The methodologies also do not always address the whole spectrum of the software development lifecycle, in which case companies are probably introducing ad-hoc customizations and rules to cover their needs.

Anti-if Programming

Ifs are bad, ifs are evil. Ifs are against object-oriented programming. Ifs defeat polymorphism. These popular remarks are easier said than enforced.

Ifs can pop up for various reasons, which would deserve a full study in order to build a decent taxonomy. Here is however a quick one:

  • Algorithmic if. An algorithmic if participates in an algorithm that is inherently procedural, where branching is required. Not much can be done about these ones. Though they tend to increase cyclomatic complexity, they are not the most evil. If the algorithm is inherently complex, so be it.
  • Polymorphic if. A class of objects deserves a slightly different treatment each time. This is the polymorphic if. Following object-oriented principles, the treatment should be pushed into the corresponding class, and voila! (See the sketch after this list.) There are however a million reasons why we might not want the corresponding logic to live in the class of the object to treat, defeating object-oriented basics. In such cases, the problem can be alleviated with visitors, decorators, or other composition techniques.
  • Strategic if. A single class deals with N different situations. The logic is still essentially the same, but there are slight differences in each situation. This can be refactored with an interface and several implementations. The implementations can inherit from each other, or use delegation to maximize reuse.
  • Dynamic if. The strategic if assumes that the situation doesn’t change during the lifetime of the object. If the behavior of the object needs to change dynamically, the situation becomes even more complicated. Chances are that attributes will be used to enable/disable certain behaviors at run-time. Such ifs can be refactored with patterns such as decorators.
  • Null if. Testing for nullity is so common that it deserves a special category, even though it could be seen as a special case of another category. Null ifs can occur to test the termination of an algorithm, the non-existence of data, sanity checks, etc. Various techniques exist to eradicate such ifs depending on the situation: the Null Object pattern, adding as many method signatures as required to introduce polymorphism, usage of assertions, etc.

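As a minimal sketch of the polymorphic case (the Shape example is mine, not from any code discussed here):

// Before: a polymorphic if, dispatching manually on the kind of object.
// double area(Object s) {
//     if (s instanceof Circle) { ... } else if (s instanceof Square) { ... }
// }

// After: the treatment is pushed into the corresponding classes.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}
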
A step-by-step Anti-If refactoring

Here is a step-by-step refactoring of a strategic if I came across. I plan to send it to the Anti-If Campaign and so took the time to document it. The code comes from the txfs project.

Let’s start with the original code:

public void writeFile (String destFileName, InputStream data, boolean overwrite)
    throws TxfsException
{
    FileOutputStream fos = null;
    BufferedOutputStream bos = null;
    boolean isNew = false;
    File f = new File (infos.getPath (), destFileName);
    if ( !overwrite && f.exists () )
    {
        throw new TxfsException ("Error writing in file (file already exists): " +
            destFileName);
    }

    try
    {
        if ( !f.exists () )
        {
            isNew = true;
        }
        DirectoryUtil.mkDirs (f.getParentFile ());
        try
        {
            Copier.copy (data, f);
        }
        finally
        {
            if ( isNew && isInTransaction () )
            {
                addCreatedFile (destFileName);
            }
            IOUtils.closeInputStream (data);
        }
    }
    catch ( IOException e )
    {
        throw new TxfsException ("Error writing in file: " + destFileName, e);
    }
}

Not very straightforward, is it? The logic is however quite simple: if the overwrite flag is set, the file can be written even if it already exists; otherwise an exception must be thrown. In addition, if a transaction is active, the file must be added to the list of created files, so that they can be removed if the transaction is rolled back later. The file must be added even if an exception occurs, for instance if the file was only partially copied.

What happens is that we have two concerns: (1) the overwrite rule and (2) the transaction rule.

Let’s try to refactor that with inheritance. A base class implements the logic when there is no transaction, and a subclass refines it to support transactions.

public void writeFile (String destFileName, InputStream data, boolean overwrite)
    throws TxfsException
{
    FileOutputStream fos = null;
    BufferedOutputStream bos = null;
    boolean isNew = false;
    File f = new File (infos.getPath (), destFileName);
    if ( !overwrite && f.exists () )
    {
        throw new TxfsException ("Error writing in file (file already exists): " +
            destFileName);
    }

    try
    {
        // if ( !f.exists () )
        // {
        //     isNew = true;
        // }
        DirectoryUtil.mkDirs (f.getParentFile ());
        try
        {
            Copier.copy (data, f);
        }
        finally
        {
            // if ( isNew && isInTransaction () )
            // {
            //     addCreatedFile (destFileName);
            // }
            IOUtils.closeInputStream (data);
        }
    }
    catch ( IOException e )
    {
        throw new TxfsException ("Error writing in file: " + destFileName, e);
    }
}

The transaction concern is removed from the base method. The overriding method in the subclass then looks like:

public void writeFile (String destFileName, InputStream data, boolean overwrite)
    throws TxfsException
{
    try
    {
        super.writeFile (destFileName, data, overwrite);
    }
    finally
    {
        addCreatedFile (destFileName);
    }
}

But then we have two problems: (1) we don’t know whether the file is new, so it is always added to the list of created files; (2) if the base method throws an exception because the file already exists and the flag is false, we still add it to the list of created files when we shouldn’t.

We could change the base method to return a code (e.g. FileCreated and NoFileCreated). But return codes are not generally a good solution and are quite ugly.

No, what we must do is remove some responsibility from the method. We split it into two methods, createFile and writeFile. One expects the file not to exist, the other expects the file to exist.

void writeFile (String dst, InputStream data ) throws TxfsException
void createFile (String dst, InputStream data ) throws TxfsException

(The method writeFile which takes the additional overwrite flag can be composed out of the two previous ones; see the sketch after the refined signatures below.)

So our simplified createFile method looks like:

public void createFile (String destFileName, InputStream data)
    throws TxfsException
{
    try
    {
        super.createFile (destFileName, data);
    }
    finally
    {
        // we add the file in any case
        addCreatedFile (destFileName);
    }
}

Alas, there is still the problem that if super.createFile fails because the file already exists, we add it to the list.

And here we start to realize that our exception handling scheme was a bit weak. The general TxfsException is rather useless: we need to refine the exception hierarchy to convey sufficient information to support meaningful treatment.

void writeFile (String dst, InputStream data )
    throws TxfsException, TxfsFileDoesNotExistException

void createFile (String dst, InputStream data )
    throws TxfsException, TxfsFileAlreadyExistsException

Here is then the final code:

public void createFile (String destFileName, InputStream data)
    throws TxfsFileAlreadyExistsException, TxfsException
{
    // I cannot use a finally block because, if the failure happens
    // because the file already existed, I must not add it to the list
    try
    {
        super.createFile (destFileName, data);
        // we add the file if creation succeeds
        addCreatedFile (destFileName);
    }
    catch ( TxfsFileAlreadyExistsException e )
    {
        // we don't add the file if it already exists
        throw e;
    }
    catch ( TxfsException e )
    {
        // we add the file in any other case (e.g. partial copy)
        // and propagate the failure
        addCreatedFile (destFileName);
        throw e;
    }
}

Conclusion

Anti-if refactoring is not so easy. There are various kinds of ifs, some of which are OK, and some of which are bad. Removing ifs can imply changing the design significantly, including the inheritance hierarchy and the exception hierarchy.

A tool to analyze code and propose refactoring suggestions would be cool, but for the time being, anti-if refactoring remains in the hands of developers, who can ensure the semantic equivalence of the code before and after the refactoring.

Links

http://www.technology-ebay.de/the-teams/mobile-de/blog/an-exercise-in-unconditional-programming.html

Getting Things Done 3.0

I started reading Getting Things Done, but then dropped it. I nevertheless quite enjoyed the points about (1) building an organizational system that we trust and (2) doing an action immediately if it takes less than 2 minutes (which I had heard many times before but didn’t know came from this book).

The book is a bit old-fashioned to me. We can argue that the concepts are general, but we are information workers now, and the “baskets” mentioned in the book have been replaced by web applications, USB keys, and iPhones.

Building a modern organizational system that we trust is far from easy. We need to balance the different dimensions that impact our trust in a system, e.g. security, reliability, durable data formats, etc. Well, the usual non-functional requirements.

Ultimately, it’s a matter of time and risk: how long would I need to re-do the work? What are the consequences if I forget this or that?

In my previous job at an ECM company, I became more aware of such issues. Managing the massive amount of information produced nowadays is a challenge and a business case. I remember being astonished when I read these numbers for the first time:

Recent estimates show that a typical worker will take 12 minutes to process a single document. Nine of these 12 minutes are spent searching for, retrieving and refiling the document — meaning that only three minutes are spent actually using the information they’ve found.

The average office:
• Makes 19 copies of each document.
• Spends $20 on labor to file each document.
• Loses 1 out of 20 office documents.
• Spends $120 searching for every misfiled document.
• Spends $250 recreating each lost document.
• Spends $25,000 to fill a four-drawer file cabinet and $2,000 annually to maintain it.

The volume of paper documents that organizations must process has increased tenfold in the last five years. Increases in paper volume drive the cost of paper handling higher, which greatly reduces profit margins.

(from Document Management Overview.)

And for the story: I first tried to find this excerpt by searching for “Introduction to enterprise content management” on my hard drive. No result. I then decided to locate it manually. I went to Library > Intranet, communication & ECM and … it was there! It’s funny that even though I recalled what the book was about, the name “Document Management Overview” was completely different from what I had tried. Ontology ftw!

Here is what composes my current organizational system:

  • Email (since 2004): three mailboxes, one personal, one professional, and one for garbage; redirection address @a3.epfl.ch as the default for long-lived communication (e.g. contacts); BCC myself in emails (since 2012).
  • Source code (since 2005): before 2010, self-administered Subversion and assembla.com for private projects; 2010-2013, Google Code; since 2013, Github.
  • Archival (since 2005): conversion of esoteric formats (.eps, .pic) to PDF; recording of minidiscs and super8 into .mp3 and .mov.
  • Online CV (since 2005): LinkedIn (since 2005); Stackoverflow (since 2010); Xing (since 2013).
  • Applications and S/N (since 2006): store the serial number in the same location as the application.
  • Backup strategy (since 2007): RAID drives are not enough if I delete files by mistake; incremental backups on CD/DVD; complete backups on external HD; TimeMachine (since 2012).
  • Digital music and pictures (since 2007): not in the cloud; iTunes for digital music (since 2009); all pictures organized by date and backed up periodically; gallery with Lightroom on my personal website; DropBox for picture sharing (since 2010).
  • Information overload (since 2008): if too much information piles up, I garbage-collect according to these rules.
  • USB keys (since 2008): be disciplined, copy the information to a computer asap and clear the key.
  • Agenda, birthdays and contacts (since 2008): Google Calendar; Google Contacts.
  • Web identity and avatar (since 2008): avatar on Gravatar; two usernames, ewernli and wrnli.
  • Bookmarks (since 2008): bookmarks are personal; Foxmark with centralization on my server (not in the cloud); organized in a taxonomy, since 2013 with tags; Pocket app (since 2017).
  • Places to visit, music to listen to, movies to watch (since 2009): various Google Documents; IMDB and Jini for movies (since 2013); Goodreads for books (since 2013).
  • Other to-dos (since 2009): leverage Google Calendar for actions that have a deadline; iPhone Reminders app.
  • Random thoughts (since 2009): Twitter; self-administered blog with Liferay (until 2013); WordPress blog (since 2013).
  • News and RSS (since 2009): Google Reader, then Feedly.
  • Scientific literature (since 2009): BibDesk as the master copy; CiteULike (until 2017); Mendeley (since 2017).
  • Passwords and logins (since 2011): password-protected Excel spreadsheet.

Getting Things Done is to business what No Broken Windows is to software development. Whenever something comes in that requires some action, either do it right now, or at least enter it into a reliable system for later. (In the case of development, that system is of course the issue tracker.)
