Glassfish mysteries #2: distributed transactions

Here are all posts of this series on Glassfish.

This second post about Glassfish mysteries deals with transaction management: there is some strange behaviour when usage scenarios differ from the traditional Web-EJB-JPA examples.

Transaction is not rolled back

Depending on the way you package your enterprise application, the annotation @ApplicationException(rollback=true) will not be honoured. This can be a very serious bug. A detailed explanation of the packaging scenario that fails can be found in the references at the end of this post. As a workaround, the application exception can be declared in ejb-jar.xml, in which case it will be processed correctly. Lesson learned: always double-check the XML generated by Glassfish during deployment (in domains/domain/generated) to verify that it matches the intended behaviour.
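
For illustration, here is what the workaround looks like in ejb-jar.xml. The <application-exception> element is the standard EJB 3.0 descriptor equivalent of the annotation; the exception class name below is hypothetical:

<assembly-descriptor>
    <application-exception>
        <!-- hypothetical exception class name -->
        <exception-class>com.example.MyBusinessException</exception-class>
        <rollback>true</rollback>
    </application-exception>
</assembly-descriptor>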

UserTransaction must be a singleton

Glassfish supports client-side transaction demarcation. This is part of the “gray” zone of the J2EE specification, in the sense that it is not mandatory but most containers support it. The object used by the client to control the transaction boundaries is the UserTransaction, which exposes the methods begin(), commit() and rollback(). A transaction is implicitly bound to the current thread. The client can neither perform multi-threaded transactions nor suspend/resume the current one.
The JTA specification is not particularly clear regarding the thread-safety of the UserTransaction object: can the same UserTransaction be used by several threads, or should each thread possess its own UserTransaction? In the case of Glassfish, the answer is even more radical: there should be one and only one UserTransaction object per client JVM. In other words, the UserTransaction must be managed like a singleton. If you have several instances of UserTransaction, your application will apparently work, but the ACID properties of the transactions are not enforced. This means that (1) concurrent clients may read uncommitted data and (2) rollback will not work properly. You will find at the end of this post a reference to the bug I reported on java.net, with a test case attached.
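
As a minimal sketch of what “managed like a singleton” means in client code (the JNDI name below is the standard one; depending on the client container, the lookup details may differ):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.transaction.UserTransaction;

public final class ClientTransactionHolder {

    private static UserTransaction instance;

    private ClientTransactionHolder() { }

    // All threads of the client JVM must share this single instance.
    public static synchronized UserTransaction get() throws NamingException {
        if (instance == null) {
            instance = (UserTransaction)
                    new InitialContext().lookup("java:comp/UserTransaction");
        }
        return instance;
    }
}

The client code then looks like:

UserTransaction utx = ClientTransactionHolder.get();
utx.begin();
// ... call EJBs, JMS, etc. ...
utx.commit();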

TopLink hangs with client-side transaction demarcation

As I wrote in the previous section, client-side distributed transactions are part of the “gray” zone of the J2EE specification. Glassfish’s transaction manager does support client-side transaction demarcation, but unfortunately TopLink doesn’t. As a consequence, when the client attempts to commit the transaction, the system hangs. This can probably be explained by the fact that TopLink was developed by Oracle, and OC4J doesn’t support client-side transaction demarcation at all. Switching to Hibernate 3 (which is very easy) solves the problem.

Allow non-component callers

We had a very complex scenario in our system: the distributed transaction contained several XA participants, including a database, JMS, and a custom JCA connector. The transaction was started from the client side. We were experiencing lots of stability issues, with some transactions failing randomly with low-level error messages such as “can not delist participant”, “got -1 from a read call”, etc. We then noticed that enabling the option “allow non-component callers” in the datasource configuration had a significant positive impact. Given that the definition of this option is extremely obscure (see the reference at the end), I don’t know exactly when this option should be enabled. Maybe it is also related to the usage of Hibernate 3. However, it seems that in complex transaction scenarios it definitely helps.
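
For reference, “allow non-component callers” is an attribute of the JDBC connection pool. It can be enabled from the Admin Console, or directly in domain.xml; a sketch (the pool definition is abbreviated and the pool name is hypothetical; verify the exact attribute form against your Glassfish version):

<jdbc-connection-pool name="MyXAPool"
        allow-non-component-callers="true"
        ... />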

References
http://forums.java.net/jive/thread.jspa?messageID=319223&#319223
http://forums.java.net/jive/thread.jspa?messageID=252496&#252496
http://forums.java.net/jive/message.jspa?messageID=246736
http://docs.sun.com/app/docs/doc/820-4496/gavro?a=view

Glassfish mysteries #1: JavaMail

Here are all posts of this series on Glassfish.

This series of posts will cover some problems we experienced with Glassfish. Let’s start with a few easy ones related to JavaMail. These are not blocking, but rather annoying.

Lookup from JNDI

There’s a bug in Glassfish v2ur2 that prevents you from getting the JavaMail session directly from JNDI. You will need to use the following code:

// MailConfiguration is a Glassfish-specific class
// (from com.sun.enterprise.deployment, if I recall correctly)
Context ic = new InitialContext();
MailConfiguration conf = (MailConfiguration) ic.lookup("mail/notification");
Session session = javax.mail.Session.getInstance(conf.getMailProperties(), null);

Custom properties

There are some obscure rules to follow if you plan to add custom properties to your JNDI mail entry. We can read in the Glassfish documentation: “Every property name must start with a mail- prefix. The Application Server changes the dash (-) character to a period (.) in the name of the property, then saves the property to the MailConfiguration and JavaMail Session objects. If the name of the property doesn’t start with mail-, the property is ignored.”

SMTP + authentication

There are no standard properties to deal with SMTP authentication. If you need to support authentication, you will need to rely on custom properties. Here is the code that we’ve been using:


// "session", "conf" and "msg" come from the lookup code shown earlier
String auth = session.getProperty("mail.smtp.auth");
String pwd = session.getProperty("mail.smtp.password"); // custom property (mail-smtp-password in JNDI)

if (Boolean.parseBoolean(auth))
{
    Transport transport = session.getTransport("smtp");
    transport.connect(conf.getMailHost(), conf.getUsername(), pwd);
    msg.saveChanges();
    transport.sendMessage(msg, msg.getAllRecipients());
    transport.close();
}
else
{
    Transport.send(msg);
}

References

https://glassfish.dev.java.net/javaee5/docs/AREF/abhaq.html
http://forums.java.net/jive/thread.jspa?messageID=264233

Sub-optimal Pagination with Oracle & Hibernate

There seems to be a bug in Hibernate 3 that results in a sub-optimal query when one attempts to fetch a specific portion of the result set, as is typically the case with pagination.

The best practice for extracting one specific page out of the complete result set with Oracle is to use the ROWNUM keyword. ROWNUM truncates the result set after N items, and Oracle optimizes its usage, which can result in drastic improvements for some queries. The outline of a paginated query looks like the following:

SELECT *
FROM (SELECT row_.*, ROWNUM rownum_
      FROM (SELECT this_.ID AS id3_0_, this_.VERSION AS version3_0_,
                   this_.NAME AS name3_0_, this_.TYPE AS type3_0_,
                   this_.marketstatus AS marketst5_3_0_
            FROM customer this_
            ORDER BY this_.ID ASC) row_
      WHERE ROWNUM <= ?)
WHERE rownum_ > ?
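
For reference, this is the kind of Hibernate 3 Criteria call that triggers such a paginated query (a sketch; session is an open org.hibernate.Session and Customer is the entity mapped to the customer table above):

List<?> page = session.createCriteria(Customer.class)
        .addOrder(org.hibernate.criterion.Order.asc("id")) // ORDER BY this_.ID ASC
        .setFirstResult(20)  // becomes "rownum_ > ?"
        .setMaxResults(10)   // becomes "ROWNUM <= ?"
        .list();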

Unfortunately, there seems to be a bug in Hibernate, and the query actually generated is:

SELECT *
FROM (SELECT row_.*, ROWNUM rownum_
      FROM (SELECT this_.ID AS id3_0_, this_.VERSION AS version3_0_,
                   this_.NAME AS name3_0_, this_.TYPE AS type3_0_,
                   this_.marketstatus AS marketst5_3_0_
            FROM customer this_
            ORDER BY this_.ID ASC) row_)
WHERE rownum_ <= ? AND rownum_ > ?

Though the two queries are semantically equivalent, the second one does not benefit from Oracle’s optimizations.

Here is the corresponding issue I’ve reported. More information about Oracle’s ROWNUM can be found on AskTom.

Threat modeling: tools in practice

We’ve investigated two tools for our threat model. Here is an overview of both tools (both from Microsoft) and our experience with them.

Threat Modeling Tool

The first tool supports system modelling with the definition of Entry Points, Trust Levels, Protected Resources, plus some general background information. Data Flows can be authored directly with the tool or imported from Visio. The tool’s main strength, however, lies in its Threat Tree modelling features. Threats can be identified and decomposed into a series of steps, following the same approach as attack trees. The tool supports AND and OR semantics when constructing advanced threat trees. E.g.: “Disclose password” can be achieved with either “Brute force password” or “Use default password”. Each step in the threat tree can be rated according to DREAD, and mitigation notes can be added.

Microsoft Threat Analysis & Modeling

The second tool supports system modelling – called application decomposition in this case – with the definition of Roles, Data, Components and External Dependencies. The traditional data flows are replaced with “application use cases” that can be modelled right from the tool using the predefined components, data and roles. The threat identification and decomposition follows the threat-attack-vulnerability-countermeasure model. The tool comes with an existing base of attacks and vulnerabilities. E.g.: the attack “SQL injection” is made possible by the “Usage of Dynamic SQL” vulnerability. Threats that have been identified can then be categorized using the CIA (confidentiality, integrity, availability) classification. Threats are not related to attacks or vulnerabilities but directly to countermeasures. This was a bit awkward to me, and I don’t understand exactly the rationale behind this choice. It’s also interesting to note the existence of so-called “Relevancies” that can be linked to attacks. E.g.: the relevancy “Component utilizes HTTP” is linked to the attacks “Response Splitting”, “Session Hijacking”, and “Repudiation Attack”.
The tool’s ambitions are much bigger and go beyond simple documentation of the system’s security. The tool contains analytics and visualization features. Analytics features aim at assessing the system automatically (e.g.: “Data access control matrix”), whereas visualization features help to understand the system from different viewpoints (e.g.: “call flow”, “data flow”, etc.). These features make sense only if the complete system was modelled with the tool, especially the application use cases.
The tool is actively maintained, and significant efforts have been invested in it to promote the threat modelling practice.

Conclusion

Threat modelling is not easy. Several approaches can be used, and it can be a time-consuming activity. We didn’t model the system in enough detail to leverage the analytics and visualization features of the Microsoft Threat Analysis & Modeling tool, so we cannot assess the relevance of such an analysis. Speaking generally about the process, our main problem was to decide to what level of detail we wanted to go, and then how to organize our findings in a meaningful way.

The basic notions of threat, attack, vulnerability and countermeasure can be vague or overlapping, and they must first be defined in the scope of the analysis, in a way that ensures the consistency of the threat model. Sample questions to ask before starting the analysis are:

  • What will be the granularity of the threats to identify? E.g.: “disclose customer information” vs. “disclose customer document”, “disclose customer number”, “disclose credit card number”. Knowing to what extent the threat analysis will be performed must be defined in advance and will drive the complete process.
  • What will be the granularity and the nature of the attacks to identify? An attack can be concrete (e.g.: “password brute force attack”) or abstract (e.g.: “denial of service attack”). In the first case, an attack is a concrete technique that can be applied to exploit a vulnerability. In the latter case, an attack is an abstraction of a set of related existing techniques. Such an abstract attack could be refined into a list of concrete attacks (e.g.: “SYN flood”, “XML bomb”, etc.).
  • How to capture generic attacks/threats? This is again related to the question of the granularity of the analysis. Consider for instance the attack “Submit HTTP form twice”. Because it can be applied to all web pages, the list of threats would be an exhaustive listing of the application features: “Place wrong order”, “Rate the item more than once”, etc. Such attacks can be captured as generic threats such as “Abuse web application” or “Disrupt system”. Similarly, all the denial of service attacks will lead to the generic threat “Degrade service availability”.

The threat model must not be an exhaustive, useless document. Therefore, the analyst must find a balance between generic attacks and threats (and the corresponding generic hardening best practices) and more detailed attacks and threats related to the specificities of the system under study.

Threat modeling: overview

Threat modelling is a process of assessing and documenting a system’s security risks. The threat model identifies and describes the set of possible attacks on your system, as well as mitigation strategies and countermeasures. Your threat modelling efforts also enable your team to justify security features within a system, or security practices for using the system, to protect your corporate assets.

Any threat modelling process will usually encompass the following steps:

1) A model of the system that is relevant for the threat analysis
2) A model of the potential threats
3) A categorization and rating of the threats
4) A set of countermeasures and mitigation strategies

There are however several approaches to perform each of the steps. We will now briefly give an overview of each step.

Step 1: Model your system

The system model is an abstraction of your system that fits the threat analysis. It differs from other traditional models in the sense that it is a mix of a deployment view, a data view and a use case view. It typically documents:

The system entry & exit points – The ways through which data enters and leaves the system, from and to the external environment.
The actors & external dependencies – The entities that legitimately interact with the system. Actors tend to represent real user roles, whereas an external dependency usually refers to a third-party system. The distinction between the two can sometimes be blurry: an external system could be considered an actor if it is the active participant in the interaction.
The trust levels & boundaries – Trust levels define the minimal access granted to an actor within the system. For example, a system administrator actor may have a trust level that allows them to modify files in a certain directory on a file server, while another user entity may be restricted from modifying files.
The assets – An asset is an item of value, an item security efforts are designed to protect. It is usually the destruction or acquisition of assets that drives malicious intent. A collection of credit card numbers is a high-value asset, while a database that contains candy store inventory is probably a lower-value asset. An asset is sometimes called a protected resource.
The use cases – The use cases for operating on the data that the application will facilitate.
The assumptions – All assumptions that were driving the modelling effort. Considering the cryptographic algorithm either public or private is, for instance, an assumption worth mentioning.

Step 2: Model your threats

Let’s first define some concepts:

Threat – The possibility of something bad happening
Attack – A means through which a threat is realized
Vulnerability – A flaw in the product
Countermeasure – A means to mitigate the vulnerability

« Threats are realized through attacks which can materialize through certain vulnerabilities if they have not been mitigated with appropriate countermeasures »

A concrete example would be:

Threat – Perform arbitrary queries on the system
Attack – Access an internal service exposed to the end-user
Vulnerabilities – (1) Firewall misconfigured (2) Lack of access control
Countermeasures – (1) Correct the firewall rules (2) Secure the EJB correctly

« Arbitrary queries can be executed through access to the back-end EJB, which is possible because of a wrong firewall configuration and a lack of access control, if the infrastructure and the application server were not configured correctly »

An advanced attack is frequently composed of a series of preliminary attacks which exploit several vulnerabilities. The attacks can be represented as a tree, called an attack tree.
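
A miniature attack tree could look like this (purely illustrative, reusing the password example from the previous post):

Goal: Disclose password
  OR: Brute force the password
  OR: Use the default password
  OR: Intercept the password
      AND: Password is transmitted in clear text
      AND: Attacker can sniff the network traffic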

The level of detail in the identification of threats, attacks and vulnerabilities is up to the analyst.

Step 3: Categorize and rate your threats

Once the threats, attacks and vulnerabilities have been identified, the threats can be categorized. Popular categorization schemes are STRIDE or CIA.

STRIDE:

Spoofing – To illegally acquire someone’s authentication information and use it to pose as that user.
Tampering – To maliciously modify information that is stored, in transit, or otherwise.
Repudiation – A malicious user denying having committed an action that he/she was unauthorised to do or that hampers the security of the organisation, while the system has no trace of the action, so it cannot be proved.
Information Disclosure – To view information that is not meant to be disclosed.
Denial of Service – Sending or directing network traffic to a host or network that it cannot handle, making it unusable to others.
Elevation of privileges – To increase the adversary’s trust level within the system, permitting additional attacks.

CIA:

Confidentiality – To ensure that information is accessible only to those authorized to have access
Availability – The ratio of the total time a functional unit is capable of being used during a given interval
Integrity – To ensure that the data remains an accurate reflection of the universe of discourse it models or represents, and that no inconsistencies exist.

Once categorized, the threats can be rated according to the risk they represent. The total risk can be evaluated according to DREAD:

DREAD

Damage Potential – Defines the amount of potential damage that an attack may cause if successfully executed.
Reproducibility – Defines the ease with which the attack can be executed and repeated.
Exploitability – Defines the skill level and resources required to successfully execute an attack.
Affected Users – Defines the number of valid user entities affected if the attack is successfully executed.
Discoverability – Defines how quickly and easily an occurrence of an attack can be identified.
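
For instance, with each category rated on a 1-10 scale (one common convention among several), a threat rated Damage = 8, Reproducibility = 10, Exploitability = 7, Affected Users = 10, Discoverability = 10 gets an overall DREAD risk of (8 + 10 + 7 + 10 + 10) / 5 = 9, i.e. a high risk.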

If attack trees have been modelled, the risk can be estimated based on the likelihood of each step in the attack tree.

Step 4: Set up countermeasures and mitigation strategies

Once the threats, attacks and vulnerabilities have been identified and documented, a set of countermeasures can be set up. Such strategies aim at reducing the attack surface and mitigating the potential effects of an attack. If a threat cannot be removed altogether, its probability should be reduced to an acceptable threshold.

Wrap up

The following quote summarizes well the rationale behind threat modeling: « Threat modeling is not a magic process/tool where you just throw stuff in and out comes goodness. Threat modeling is a structured way of thinking about and addressing the risks to what you are about to build rather than going about it randomly. »

References 

http://blogs.msdn.com/threatmodeling/archive/2007/10/30/a-discussion-on-threat-modeling.aspx
http://blogs.msdn.com/krishnanr/archive/2004/10/27/248780.aspx
http://www.devx.com/security/Article/37502/0/page/4
http://www.schneier.com/paper-attacktrees-ddj-ft.html#rf2
http://www.agilemodeling.com/artifacts/securityThreatModel.htm 
https://martinfowler.com/articles/agile-threat-modelling.html

StAX pretty printer

Using StAX to write XML is a lot easier than using either DOM or SAX. There is however no option to indent the generated XML, unlike with SAX or DOM. When faced with this problem, I came up with a simple yet generic solution: intercept all write calls and prepend the necessary whitespace according to the current depth in the XML. This is easily achieved with an InvocationHandler that decorates the XMLStreamWriter.

Here is a sample usage:

XMLOutputFactory factory = XMLOutputFactory.newInstance();
ByteArrayOutputStream baos = new ByteArrayOutputStream();

XMLStreamWriter wstxWriter = factory.createXMLStreamWriter(baos, "UTF-8"); // specify encoding

// Wrap with pretty print proxy
PrettyPrintHandler handler = new PrettyPrintHandler(wstxWriter);
XMLStreamWriter prettyPrintWriter = (XMLStreamWriter) Proxy.newProxyInstance(
        XMLStreamWriter.class.getClassLoader(),
        new Class[] { XMLStreamWriter.class },
        handler);

prettyPrintWriter.writeStartDocument();

And the InvocationHandler looks like this (see this gist):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.xml.stream.XMLStreamWriter;

public class PrettyPrintHandler implements InvocationHandler {

    private static final Logger LOGGER = Logger.getLogger(PrettyPrintHandler.class.getName());

    private final XMLStreamWriter target;
    private int depth = 0;
    private final Map<Integer, Boolean> hasChildElement = new HashMap<Integer, Boolean>();

    private static final String INDENT_CHAR = " ";
    private static final String LINEFEED_CHAR = "\n";

    public PrettyPrintHandler(XMLStreamWriter target) {
        this.target = target;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        String m = method.getName();
        if (LOGGER.isLoggable(Level.FINE)) {
            LOGGER.fine("XML event: " + m);
        }
        // Needs to be BEFORE the actual event, so that for instance the
        // sequence writeStartElem, writeAttr, writeStartElem, writeEndElem, writeEndElem
        // is correctly handled
        if ("writeStartElement".equals(m)) {
            // update state of parent node
            if (depth > 0) {
                hasChildElement.put(depth - 1, true);
            }
            // reset state of current node
            hasChildElement.put(depth, false);
            // indent for current depth
            target.writeCharacters(LINEFEED_CHAR);
            target.writeCharacters(repeat(depth, INDENT_CHAR));
            depth++;
        }
        else if ("writeEndElement".equals(m)) {
            depth--;
            if (Boolean.TRUE.equals(hasChildElement.get(depth))) {
                target.writeCharacters(LINEFEED_CHAR);
                target.writeCharacters(repeat(depth, INDENT_CHAR));
            }
        }
        else if ("writeEmptyElement".equals(m)) {
            // update state of parent node
            if (depth > 0) {
                hasChildElement.put(depth - 1, true);
            }
            // indent for current depth
            target.writeCharacters(LINEFEED_CHAR);
            target.writeCharacters(repeat(depth, INDENT_CHAR));
        }
        // propagate return values (matters for non-void methods such as getPrefix)
        return method.invoke(target, args);
    }

    private String repeat(int d, String s) {
        String _s = "";
        while (d-- > 0) {
            _s += s;
        }
        return _s;
    }
}

The repeat method is quite ugly. You can use StringUtils from commons-lang instead, or check one of the other repeat implementations on Stack Overflow.
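
With commons-lang, the whole repeat method collapses to a single call (StringUtils.repeat takes the string to repeat, then the count):

target.writeCharacters(org.apache.commons.lang.StringUtils.repeat(INDENT_CHAR, depth));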

JCA connector: file system adapter

I’ve been working on a sample JCA connector that can be used to access the file system. The connector is pseudo-transactional: it is transacted, but not very robust. It correctly creates and deletes files when the transaction is committed or rolled back. However, the connector doesn’t keep this information in a transaction log, so if the system crashes, the file system may be left in an inconsistent state. The same can happen in the rare case of the transaction being rolled back after the first phase of the 2PC protocol.

The code is available on github: txfs

JCA connector: overview

The J2EE stack is composed of several containers, among which is the JCA container. Not as popular as the EJB or Web containers, the JCA container is nevertheless a very important piece of the J2EE stack. It contains the “glue” that provides transparent transactional connectivity to third-party systems. Remember that, from the point of view of the Web and EJB containers, the database or the JMS broker are third-party systems. So you have probably already used JCA connectors without even noticing it.
A JCA connector is an adapter between the J2EE world and the outside world. The J2EE world is indeed a managed environment that provides many services such as transaction management, connection pooling, thread management, lifecycle events, configuration injection or declarative security.  If you want to leverage these features while connecting to your third party system, JCA is the way to go.

Connectors are bi-directional. They can be used to connect to the outside, or from the outside. In the latter case, we speak about connection inflow. We will discuss here only the first case: outbound connectivity.

Anatomy of a JCA connector

Managed connection – A managed connection represents a physical connection to the third-party system. Managed connections are pooled and are enlisted/delisted in global transactions automatically by the application server.
Connection handle – The client (e.g. an EJB or a Servlet) does not manipulate the managed connection directly. It uses instead a connection handle, which exposes only the subset of available operations that are relevant to a client; it is the client-side view of the managed connection.

Several handles can share the same underlying managed connection. This can happen, for instance, if the client obtains two connection handles for a given third-party system within the same transaction; the application server can optimize this case and re-use the same managed connection.

Connection request info – The information that identifies the target third-party system (e.g. a database connection string). The connector’s connection pool can contain connections to different target systems. The connection request info must then be checked to ensure that a connection targets the desired external system.
Managed connection factory – The managed connection factory has two purposes:

(1) Create brand new managed connections according to the desired connection request info

(2) Check whether an existing managed connection can be re-used for a given connection request info. This is referred to as connection matching.

The connection matching mechanism exists because the application server, which manages the connection pool, is not able to know which target system a given connection points to. This is part of the connector’s logic.

Connection factory – The client (e.g. an EJB or a Servlet) does not manipulate the managed connection factory directly. It uses instead a connection factory, which is a façade that shields the client from the connector’s internal complexity. The connection factory typically contains a single method getConnection(…) which returns a connection handle to the client. The connection factory interacts with the application server and the managed connection factory to implement connection pooling correctly.
XAResource – From the conceptual point of view, the managed connection is the participant that is enlisted/delisted in the global transaction. However, the managed connection is not forced to implement the XAResource interface itself, but only to expose a method getXAResource(…). An auxiliary object can then be returned that acts as an adapter between the JTA transaction manager and the managed connection.
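
Before looking at the runtime choreography, here is a hedged sketch of what the client side typically looks like; the interfaces follow the standard CCI style, and the JNDI name is hypothetical:

import javax.naming.InitialContext;
import javax.resource.cci.Connection;
import javax.resource.cci.ConnectionFactory;

InitialContext ctx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("eis/FileSystemConnector");
Connection handle = cf.getConnection(); // a connection handle, not the managed connection
try {
    // ... use the connector's client API ...
} finally {
    handle.close(); // returns the underlying managed connection to the pool
}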

Below is the component view:

[Figure: JCA component view]

A complete sequence diagram

The various elements of a JCA connector have now been presented. Below is the outline of a call to the connection factory’s getConnection() method.

[Figure: JCA_sequence – sequence diagram of a getConnection() call]

1 The client creates the connection request info for the external system
3 The client calls getConnection( requestInfo ) on the ConnectionFactory
3.1 The ConnectionFactory calls allocateConnection( managedConnectionFactory, requestInfo ) on the application server. The application server exposes its functionality through the ConnectionManager interface; connectors can therefore be implemented in a standard way without knowing the application server implementation.
3.1.1 The application server gathers the list of available (free) managed connections in the pool. Because the application server doesn’t know which managed connection points to the desired system, it calls matchConnection( listOfFreeConnections, requestInfo ) on the managed connection factory.
3.1.2 The managed connection factory checks whether any of the provided connections corresponds to the given request info. If none does, it returns null.
3.1.3 No managed connection is available, or none of them matches the request info. The application server needs to allocate a new connection and calls createManagedConnection( requestInfo ) on the managed connection factory.
3.1.3.1 The managed connection factory creates a brand new managed connection and returns it.
3.1.5 The application server gets the XAResource belonging to the managed connection and enlists the connection in the distributed transaction. It then puts the managed connection in the pool.
3.2 The application server returns the managed connection to the connection factory.
3.3 The connection factory calls getConnection() on the managed connection and obtains a connection handle.
4 The connection factory returns the connection handle to the client.

Web service and polymorphism

This page presents several options for defining a web service interface that uses polymorphism on lists of objects, and compares the possible interface definitions.

My preferred pattern is #5. It was used in one of our web services and works nicely.

See also http://www.ibm.com/developerworks/webservices/library/ws-tip-xsdchoice.html

Wrapper with two collections

Option: 1
Name: Wrapper with two collections
Description: Declaration of two lists to hold each kind of filter
Java definition:

class FilterClause
{
    @XmlElement(name="termFilter")
    List<TermFilter> termFilters;

    @XmlElement(name="fullTextFilter")
    List<FullTextFilter> ftFilters;
}

Distinction between null or empty list: Yes – if the lists are empty, the filterClause tag is still there
Expected SOAP request:

<filterClause>
  <termFilter>
    <name>edoc:Date</name>
    <operator>=</operator>
    <value>2005-10-10</value>
  </termFilter>
</filterClause>

Pros
  • Simple
Cons
  • Not extensible: adding a new subtype requires adding a new list to the class and re-generating the stub on the client side
  • Not very object-oriented

Wrapper with polymorphism on tag name

Option: 2
Name: Wrapper with polymorphism on tag name (xsd:choice)
Description: Declaration of one list with xsd:choice to distinguish the type of the elements and change the tag accordingly
Java definition:

public class FilterClause
{
    @XmlElements({
        @XmlElement(name="termFilter", type=TermFilter.class),
        @XmlElement(name="fullTextFilter", type=FullTextFilter.class)
    })
    private List<Filter> filters;
}

Distinction between null or empty list: Yes – if the list is empty, the filterClause tag is still there
Expected SOAP request:

<filterClause>
  <termFilter>
    <name>edoc:Date</name>
    <operator>=</operator>
    <value>2005-10-10</value>
  </termFilter>
</filterClause>

Pros
  • Extensible to new subtypes without the need to re-generate the stub on the client side
  • XML is easily readable
Cons
  • To add a new subtype, a new @XmlElement entry must be added manually

Wrapper with polymorphism with xsi:type

Option: 3
Name: Wrapper with polymorphism with xsi:type
Description: Usage of a wrapper with the declaration of one list using the base type – polymorphism is detected automatically by JAXB & .NET
Java definition:

public class FilterClause
{
    @XmlElement(name="filter")
    private List<Filter> filters;
}

Distinction between null or empty list: Yes – if the list is empty, the filterClause tag is still there
Expected SOAP request:

<filterClause>
  <filter xmlns:q1="http://www.imtf.com/hypersuite/hydra" xsi:type="q1:termFilter">
    <name>edoc:Date</name>
    <operator>=</operator>
    <value>2005-10-10</value>
  </filter>
</filterClause>

Pros
  • Extensible to new subtypes without the need to re-generate the stub on the client side
  • XML is easily readable because of the wrapper tag
Cons
  • To add a new subtype, a new @XmlElement entry must be added manually
  • Wrapper tag is useless except for readability

Polymorphic list on tag name

Option: 4
Name: Polymorphic list on tag name (xsd:choice)
Description: Same as option 2, without the wrapper
Java definition: Cannot be defined in Java, because @XmlElement cannot be applied to the parameter of a method signature
Distinction between null or empty list: No – if the list is empty or null, no tags are written
Expected SOAP request:

<termFilter>
  <name>edoc:Date</name>
  <operator>=</operator>
  <value>2005-10-10</value>
</termFilter>

Pros: See comment in Java definition
Cons: See comment in Java definition

Polymorphic list with xsi:type

Option: 5
Name: Polymorphic list with xsi:type
Description: Same as option 3, without the wrapper
Java definition:

@XmlSeeAlso({FullTextFilter.class, TermFilter.class})
public class XXXX
{
    @WebParam(name="filter")
    List<Filter> filters;
}

Distinction between null or empty list: No – if the list is empty or null, no tags are written
Expected SOAP request:

<filter xmlns:q1="http://www.imtf.com/hypersuite/hydra" xsi:type="q1:termFilter">
  <name>edoc:Date</name>
  <operator>=</operator>
  <value>2005-10-10</value>
</filter>

Pros
  • Extensible to new subtypes without the need to re-generate the stub on the client side
  • All subtypes can be defined in a secondary XSD that is imported in the WSDL
Cons
  • To add a new subtype, a new @XmlSeeAlso entry must be added manually

Fun with iTunes Shuffle and Probabilities

I recently tagged and imported all my MP3s into iTunes. I then noticed that there were lots of albums that I had only partially listened to, and I decided to use the “Party Shuffle” feature to listen to my library randomly and eventually hear all the songs.

After a couple of weeks, I observed that some songs would reappear in the playlist and be picked twice. Over the weeks, the frequency of “re-entry” songs increased, with the direct consequence that new music was played less and less. Even though I had already realized that it would not be possible to hear all the songs with this approach, I was still surprised by the “re-entry” rate, which I would have intuitively expected to be much lower.

I turned to probability to better understand the situation.

Let n be the size of my library. After t songs played randomly, what is the probability that a given song was played at least once? The naive guess is:

P( song played at least once ) = t / n

Absolutely not! This probability must be computed as 1 minus the probability that the song was never played, which gives:

P( song played at least once ) = 1 – ((n-1)/n)^t

More generally, the probability of a song having been played exactly x times after t plays is given by:

P( x ) = (1/n)^x * ((n-1)/n)^(t-x) * C( t, x )

where C(t,x) is the binomial coefficient, i.e. the number of ways of choosing which x of the t plays picked the song. Expanded:

P( x ) = (1/n)^x * ((n-1)/n)^(t-x) * t! / ((t-x)! x!)

Note that the probability that the song was never played (x=0) is still ((n-1)/n)^t.

After t songs, the sum P(0) + P(1) + … + P(t) = 1 (this is the binomial theorem), which confirms that the formula is consistent.

The average number of distinct songs that have been played after t plays can be computed as:

Avg. played = n * P( song played at least once ) = n * ( 1 – ((n-1)/n)^t ) = n – (n-1)^t / n^(t-1)

The probability of hearing a new song can be computed as ( n – avg. played ) / n, which is exactly the probability that a given song was never played, P(x=0); the “re-entry” rate is its complement.

The graph below shows the probability that a song was never played, for a library of 500 songs, after 0, 50, 100, etc. plays. It’s interesting to notice that the probability of picking a new song falls below 50% after about 350 plays.

[Figure: iTune_probability – probability that a given song was never played, n = 500]
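
A quick way to reproduce the numbers behind the graph (a plain Java sketch):

public class ShuffleProbability {
    public static void main(String[] args) {
        int n = 500; // library size
        for (int t = 0; t <= 600; t += 50) {
            // P(a given song was never played after t plays) = ((n-1)/n)^t
            double pNeverPlayed = Math.pow((n - 1) / (double) n, t);
            System.out.printf("t = %3d   P(never played) = %.3f%n", t, pNeverPlayed);
        }
    }
}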

More:

Spotify noticed this as well 🙂

Spotify Updates Shuffle to Keep Your Playlists Feeling Fresh