Package Visibility is Broken

In Java, classes and class members have package visibility by default. To restrict or widen the visibility of classes and class members, the access modifiers private, protected, and public must be used.

Modifier      Class   Package   Subclass   World
public        Y       Y         Y          Y
protected     Y       Y         Y          N
no modifier   Y       Y         N          N
private       Y       N         N          N

(from Controlling Access to Members)

These modifiers control encapsulation along two dimensions: packaging and subclassing. Together, they make it possible to encapsulate code in flexible ways. Sadly, the two dimensions interfere in nasty ways.

Shadowing

A subclass might not see all methods of its superclass, and can thus redeclare a method with an existing name. This is called shadowing or name masking. For instance, a class and its subclass can both declare a private method foo() without any overriding taking place. This situation is confusing and best avoided.
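A minimal sketch of this effect (class and method names are made up for illustration):

class Parent {
    private String name() { return "parent"; }
    public String callName() { return name(); } // always binds to Parent.name()
}

class Child extends Parent {
    // Same signature, but NOT an override: private methods are
    // invisible to subclasses, so this merely shadows Parent.name().
    private String name() { return "child"; }
}

// new Child().callName() returns "parent", not "child"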

With package visibility, the situation gets worse. Let us consider the snippet below:

package a;
public class A {
    int say() { return 1; }
}

package b;
public class B extends a.A {
    int say() { return 2; }
}

package a;
class Test {
    public static void main(String[] args) {
        a.A a = new b.B();
        System.out.println(a.say()); // prints 1, WTF!!
    }
}

(from A thousand years of productivity: the JRebel Story)

The method B.say() does not override A.say() but shadows it. Consequently, the static type at the call site determines which method is invoked.

One could argue that everything works as intended, and that it is clear that B.say() does not override A.say() since there is no @Override annotation.
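Indeed, the compiler confirms that no overriding takes place: the following variant of B (a sketch) is rejected at compile time.

package b;
public class B extends a.A {
    @Override
    int say() { return 2; } // error: method does not override or implement
                            // a method from a supertype
}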

This argument makes sense when private methods are shadowed. In that case, the developer knows about the implementation of the class and can figure this out. For methods with package visibility, the argument is not acceptable since developers shouldn’t have to rely on implementation details of a class, only its visible interface.

The static types in a program should not influence the run-time semantics. The program should work the same whether the variable “a” has static type “A” or “B”.

Reflection

With reflection, programmers have the ability to inspect and invoke methods in unanticipated ways. Reflection should honor the visibility rules and authorize only legitimate actions. Unfortunately, it is hard to define what is legitimate and what is not. Let us consider the snippet below:

class Super {
    @MyAnnotation
    public void methodOfSuper() {
    }
}

public class Sub extends Super {
}

Method m = Sub.class.getMethod("methodOfSuper");
m.getAnnotations(); // WTF, empty list

Clearly, the method methodOfSuper is publicly exposed by instances of the class Sub. It is legitimate to be able to reflect upon it from another package. The class Super, however, is not publicly visible, and its annotations are thus ignored by the reflection machinery.
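A related quirk is easy to reproduce from a class outside the package of Super (this fragment is mine, not part of the original example): even invoking the inherited public method reflectively fails, because its declaring class is not accessible.

Method m = Sub.class.getMethod("methodOfSuper");
// m.invoke(new Sub()) throws IllegalAccessException here:
// the declaring class Super is not accessible from this package.
m.setAccessible(true); // the usual workaround
m.invoke(new Sub());   // now succeeds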

Package visibility is broken

Package visibility is a form of visibility between private and protected: some classes have access to the member, but not all (only those in the same package). This visibility sounds appealing: bundle code in small packages, expose the package API using the public access modifier, and let classes within the package freely access each other. Unfortunately, as the examples above have shown, this strategy breaks in certain cases.

Accessibility in Java is in a way too flexible. The combination of the four modifiers with the possibility to inherit and “widen” the visibility of classes and class members can lead to obscure behaviors.

Simpler forms of accessibility should therefore be preferred. Smalltalk, for instance, supports inheritance but has no access modifiers: methods are always public and fields are always protected. Go, on the other hand, embraces package visibility but got rid of inheritance. Simple solutions are easier to get right.

NOTES:

  • In “Moderne Software-Architektur: Umsichtig planen, robust bauen mit Quasar” the author argues that method-level visibility makes no sense. Instead, components consist of classes, which are either exposed to the outside (the component interface) or belong to the component’s internals and are hidden (the component implementation). This goes in the direction of OSGi and the future Java module system.

Masterminds of Programming

Masterminds of Programming features exclusive interviews with the creators of popular programming languages. Over more than 400 pages, the book collects the views of these inventors on varying topics such as language design, backward compatibility, software complexity, developer productivity, and innovation.

Interestingly, there isn’t that much about language design in the book. The creation of a language seems to happen out of necessity, and the design itself is mostly the realization of an intuitive vision based on gut feelings and bold opinions. The authors’ judgments about trade-offs (e.g. static vs. dynamic typing, or security vs. performance) are surprisingly unbalanced, and when asked to explain the rationale for some design choices, the explanations tend to be rather scarce.

Instead, the authors describe with passion the influences that led them to a particular design. The book thus contains a good deal of historical information about the context in which each language was born.

  • C++ was invented to enable system programming with objects
  • Awk was invented to easily process data in a UNIX fashion
  • BASIC was invented to teach students programming
  • Lua was invented to easily script components
  • Haskell was invented to unify the functional programming language community
  • SQL was invented to query relational databases with an approachable language
  • Objective-C was invented to bring objects to the C world
  • Java was invented to provide a secure language in a networked world
  • C# was invented as the strategic language for .NET, the modern Microsoft platform
  • UML was invented as the unification of modeling languages
  • PostScript was invented to enable flexible typesetting and printing
  • Eiffel was invented to make objects robust with contracts

Both the interviewers and interviewees are knowledgeable and articulate. The inventors smoothly distill their experience and insights during semi-structured interviews. Throughout the book, discussions remain mostly general, which is both a plus and a minus: the material is accessible to all, but multiple sections have a low information density. The book could easily have been shortened with better editing.

The discussions about software engineering in general turned out to be the ones I enjoyed most. Some of the interesting ideas touched on in the book were, for instance:

  • Simulating projects helps acquire experience faster, p.254
  • Classes are units of progress in a system, p.255
  • We need an economic model of software, p.266
  • Object-oriented programming and immutability are compatible, p.315
  • What UML is good for: useful for data modelling, moderately useful for system decomposition, not so useful for dynamic things, p.342
  • Generating code from UML is a terrible idea, p.339
  • There’s no software crisis; it’s overplayed for shock value, p.354
  • How broken HTML is, and how much better it would have been if the web had started with a typesetting language like PostScript, p.405

These points come from the later interviews, but there are similarly nice bits and pieces in all chapters; it just turned out that I started taking notes only halfway through the book.

Amongst the recurring themes, the notion of simplicity pops out and is discussed multiple times, at the language level and at the software level. Several interviewees quote Einstein’s “as simple as possible, but not simpler”, and emphasize the concepts of minimalism and purity, each in their own way.

The book is also very good at instilling curiosity about unknown languages. I was initially tempted to skip the chapters about languages I didn’t know, and am glad that I didn’t. Stack-based languages like Forth and PostScript appear as examples of a powerful but overlooked paradigm; the chapter about awk almost reconciled me with bash scripting; and the discussion about UML made me reconsider its success: the fact that the whole industry agreed on a common notation for basic language constructs shouldn’t be taken for granted.

In conclusion, this book isn’t essential, but it is enjoyable if you are an all-rounder with some time ahead who appreciates thinking aloud and good discussions around a cup of coffee.

The Cost of Volatile

Assessing the scalability of programs and algorithms on multicore hardware is critical. There is a substantial literature on locks and locking schemes, but the exact cost of volatile is less clear.

For the software composition seminar, Stefan Nüsch and I decided to look into this topic. Essentially, we devised a benchmark where multiple threads access objects within a pool. Each thread has a set of objects to work with. To generate contention, the sets could be fully disjoint, partially overlapping, or fully overlapping. The ratio of reads and writes per thread was also configurable; a sketch of the idea follows.
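The sketch below is a simplified reconstruction of such a worker, not the actual benchmark code (the class names, the fixed iteration count, and the random access pattern are my assumptions):

// Each cell exposes a volatile field so that reads and writes
// go through the memory model's volatile semantics.
class Cell {
    volatile long value;
}

class Worker implements Runnable {
    private final Cell[] workingSet;  // may overlap with other threads' sets
    private final double writeRatio;  // fraction of operations that are writes
    private final java.util.Random rnd = new java.util.Random();

    Worker(Cell[] workingSet, double writeRatio) {
        this.workingSet = workingSet;
        this.writeRatio = writeRatio;
    }

    public void run() {
        long sink = 0; // accumulate reads to discourage dead-code elimination
        for (int i = 0; i < 1_000_000; i++) {
            Cell c = workingSet[rnd.nextInt(workingSet.length)];
            if (rnd.nextDouble() < writeRatio) {
                c.value = i;     // volatile write
            } else {
                sink += c.value; // volatile read
            }
        }
    }
}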

On an AMD 64 Dual Core, the graph looks as follows:

[figure: bench_amd64]

On an i7 Quad Core, the graph looks as follows:

[figure: bench_i7]

We clearly see that different architectures have different performance profiles.

In future work, we could try to reproduce false sharing and assess the impact of other forms of data locality.

More details about the benchmark and methodology can be found in his presentation. The code is on GitHub.

Here are some links about the semantics of volatile, and memory management in general.

Understanding the Visibility of Side-Effects

This entry is all about the Java Memory Model and the happens-before partial order it formalizes.

The official specification of the JMM is authoritative but unreadable. Fortunately, there are useful resources around that aim at making it more accessible.

Happens-Before

The JMM defines a partial order called happens-before on all actions in the program. Actions are reads and writes to variables, locks and unlocks of monitors, and so on. The rules for happens-before are as follows:

  • Program order rule. Each action in a thread happens-before every action in that thread that comes later in the program order.
  • Monitor lock rule. An unlock on a monitor lock happens-before every subsequent lock on that same monitor lock.
  • Volatile variable rule. A write to a volatile field happens-before every subsequent read of that same field.
  • Thread start rule. A call to Thread.start on a thread happens-before every action in the started thread.
  • Thread termination rule. Any action in a thread happens-before any other thread detects that this thread has terminated, either by successfully returning from Thread.join or by Thread.isAlive returning false.

The happens-before partial order provides clear guidance about what values are allowed to be returned when data is read. In other words, it formalizes the visibility of side-effects across threads.

A read is allowed to return the value of a write 1) if that write is the last write to that variable before the read along some path in the happens-before order, or 2) if the write is not ordered with respect to that read in the happens-before order.
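For instance, here is a minimal sketch (the class and field names are mine) where the volatile variable rule, chained with the program order rule, guarantees that a read sees a specific write:

class Publication {
    int data;                // plain field
    volatile boolean ready;  // volatile flag

    void writer() {          // runs in thread 1
        data = 42;           // happens-before the volatile write (program order)
        ready = true;        // volatile write
    }

    void reader() {          // runs in thread 2
        if (ready) {         // volatile read, ordered after the volatile write
            System.out.println(data); // guaranteed to print 42
        }
    }
}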

Instructions can be reordered as long as the happens-before order is preserved. The code below

t1 = 1;
t2 = 2;
t3 = t1+t2;
t4 = 2*t1;

can for instance be reordered as follows

t2 = 2;
t1 = 1;
t4 = 2*t1;
t3 = t1+t2;

If two threads share data without using any synchronization, the writes of one thread are considered unordered with respect to the other thread; according to condition 2), the execution is not deterministic and the writes of one thread might or might not be visible to the other thread.

Let us consider the two threads below, and one given execution:

  T1        T2

s = s+1 

          s = s+1

s = s+1 

The second increment might or might not see the previous increment in T1. Similarly, the third increment might or might not see the side-effect of the second increment.

In improperly synchronized programs, a read will return the value of one of the possible matching writes, according to conditions 1) and 2). This corresponds to data races. In properly synchronized programs, a read has one unique ancestor write.

  T1        T2

lock m
  |
s = s+1 
  |
unlock m  
          \  
          lock m
             |
          s = s+1
             |
          unlock m
         /
lock m
  |
s = s+1 
  |
unlock m                  

Implementing Happens-Before

Conditions 1) and 2) specify that side-effects might or might not be visible consistently to threads in improperly synchronized programs. They allow the implementation to avoid reading shared memory all the time, using either CPU caches or compiler optimizations. The specification, however, never speaks of specific implementation choices like caching and optimizations.

For instance, the code

t2 = s+1
t3 = s*2

could be rewritten by a compiler optimization as

t = s
t2 = t+1
t3 = t*2

where t caches the value read from s.

Typically, the compiler will emit memory barriers to flush the CPU caches. The JSR-133 Cookbook provides guidance about implementation issues.

Data Races

Failure to properly synchronize accesses to shared data is called a data race. Conversely, a program is data race free if all accesses to shared state are synchronized. But what exactly does data race freedom mean?

Is the program data race free if:

  1. accesses to shared data happen within critical sections?
  2. shared data is declared volatile?
  3. accesses to shared data are never concurrent?

Issues with the memory model are typically discussed using the double-checked locking pattern and the “pausable” thread pattern. These provide only partial answers to the questions above.

Point 1 is true. If the accesses to shared data are protected by the same critical section, there is no possible data race.
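A minimal sketch of this case (the class is mine, for illustration): all accesses go through one monitor, so every read has a unique ancestor write.

class SafeCounter {
    private int counter = 0;
    private final Object m = new Object();

    void increment() {
        synchronized (m) { // all threads use the same monitor m
            counter = counter + 1;
        }
    }

    int get() {
        synchronized (m) {
            return counter;
        }
    }
}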

Point 2 is also true. If you declare the variables as volatile, side effects will be made visible to the other threads and there will be no data race. But remember: data race freedom does not mean that the behavior is correct. Consider the trivial example below:

counter = counter + 1;

Making the counter volatile won’t suffice to ensure all increments are recorded.
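The increment is a read followed by a write, so two threads can read the same value and both write back value + 1, losing one increment. A sketch of the problem and of the standard fix with java.util.concurrent (the class below is mine):

import java.util.concurrent.atomic.AtomicInteger;

class Counters {
    static volatile int counter = 0;
    static final AtomicInteger atomicCounter = new AtomicInteger();

    static void racyIncrement() {
        counter = counter + 1;           // volatile read, then volatile write:
                                         // another thread may write in between
    }

    static void safeIncrement() {
        atomicCounter.incrementAndGet(); // one atomic read-modify-write
    }
}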

Point 3 holds, but requires some clarification of the underlying assumptions (answers to my Stack Overflow question failed to be clear-cut and authoritative). Let us consider the situation where multiple threads manipulate an object X that is plain data, but alternate their temporal execution (so that X is never accessed concurrently) via another object Y that relies on concurrency control (wait, notify, synchronized). Should fields of object X be volatile or not?

If we assume that the threads are synchronized using at least one of the concurrency control primitives, and not just temporal alternation thanks, say, to well-timed sleep statements, this implies the alternating acquisition of at least one lock m, as for instance in:

while (true) {
    synchronized (m) { m.wait(); }
    counter = counter + 1;
    synchronized (m) { m.notify(); }
}

There is a happens-before dependency between the increment statement and the release of the wait lock. By construction, when the lock is released, exactly one thread will acquire it and wait again. So exactly one increment will happen-after the release of the wait lock. Consequently, each increment is guaranteed to see the value of the previous increment; no need for volatile.

More