Tuesday, June 26, 2012

Update on JPA 2.1 and fetch control

UPDATE: JSR 338 (JPA 2.1) has been released with support for "entity graphs", which at first glance seem to meet this need.

I re-posted my query to javaee-spec-users after seeing no response on the jpa-spec users list. A reply from Linda DeMichiel on the javaee-spec list looks promising.

Separate discussion (twitter) suggests it's conference season so everyone's busy and sidetracked as well.

The spec proposal does indeed list:
Support for the use of "fetch groups" and/or "fetch plans" to provide further control over data that is fetched, detached, copied, and/or used in merging.
but I'd seen no discussion of it on the EG list, which I've been monitoring since near start. (UPDATE: There'd been a wee bit, I just missed it.) I didn't realise they were working to an ordered, specific agenda, though it certainly makes sense to tackle such a big task in small chunks like that.

The JPA 2.1 spec proposal also has some other good stuff. Points of particular interest to me include:
  • Methods for dirty detection
  • Additional event listeners and callback methods; availability of entity manager to callbacks
Dirty detection in particular has caused me a lot of pain in the past, so it's good to see it on the list. It'd simplify quite a few tasks around working with JPA, particularly in apps with longer-lived sessions like interactive GUI / swing apps, Vaadin web apps, etc.

One thing that isn't there that perhaps should be is handling failure more cleanly. Currently one needs to clone entity graphs before attempting to persist them using a tool like org.hibernate.util.SerializationHelper, because they may be in an inconsistent state if a persist operation fails. It's slow and ugly to need to serialize/deserialize to clone an entity graph, but hard to get around because dynamic weaving and enrichment means you can't properly clone an entity just by allocating a new one and copying properties into it. Worse, you have to handle all the cascades yourself. Many common use cases involve situations where failure is normal: serialization failures of serializable transactions, optimistic locking issues, etc. Often you want to re-fetch the entities and re-do the work from scratch, but that's painful if you have user-modified state in those entities and all you want to do is retry a merge after a database serialization failure.
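To make the workaround concrete, here's a minimal sketch of the clone-before-persist pattern. It uses plain JDK serialization to keep the example self-contained; Hibernate's SerializationHelper does essentially the same thing internally. The Customer class is a hypothetical stand-in for a real JPA entity.

```java
import java.io.*;

public class EntitySnapshot {

    // Deep-clone any Serializable object graph by serializing and
    // deserializing it. Slow and ugly, but it survives dynamic weaving
    // and handles the whole cascade graph, unlike field-by-field copying.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepClone(T entity) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(entity);
            oos.flush();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (T) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("entity clone failed", e);
        }
    }

    // Hypothetical entity standing in for a JPA-managed class.
    public static class Customer implements Serializable {
        public String name;
        public Customer(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Customer original = new Customer("Alice");
        // Snapshot before attempting the persist/merge; if the operation
        // fails, 'original' is untouched and the merge can be retried.
        Customer snapshot = deepClone(original);
        snapshot.name = "Bob";
        System.out.println(original.name + " " + snapshot.name);
    }
}
```

In real code you'd clone the graph before calling `em.persist()`/`em.merge()` and fall back to the snapshot if the transaction fails.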

Guess I'd better pipe up about that too. It isn't news, but if it isn't on the agenda maybe it should be added for discussion.

Now is the time to join the jpa-spec users list on http://java.net/projects/jpa-spec and pipe up about practical, real-world issues you face with JPA. Even if everyone's out conferencing, email queues up nicely.

Monday, June 25, 2012

Updated AS7 <-> EclipseLink integration

I've pushed an update to the EclipseLink <-> AS7 integration library github.com/ringerc/as7-eclipselink-integration.

Version 1.1 now produces a proper JBoss AS 7 persistence provider integration module. It'll automatically inject the right properties into EclipseLink, so you don't need to modify persistence.xml or set system properties anymore.

I haven't found a proper solution to the null static metamodel issue or the dynamic weaving issues. Rich's code for VFS integration, the logging integration code, and the rest all just work automagically now, though.

I'd like to add better integration tests, but I'm being held back by today's JBoss AS 7 issue, https://issues.jboss.org/browse/AS7-3955 .

NOTE: This version of the integration code doesn't seem to work on the latest AS nightly, but it's fine on 7.1.1.Final.

Hope this is useful.

Saturday, June 23, 2012

Mail to the JPA 2.1 Expert Group re fetch control

I'm setting up another opportunity to look foolish - which I view as a good thing; if I don't risk looking foolish, I don't learn nearly as much.

I've mailed the JPA 2.1 EG re control over fetch mode and strategy on a per-property, per query basis, as I'm concerned this may not otherwise be considered for JPA 2.1 and thus Java EE 7. As previously expressed here I think it's a big pain point in Java EE development.

I'd appreciate your support if you've found JPA 2's control over eager vs lazy fetching limiting and challenging in your projects. Please post on the JPA 2.1 users mailing list.


UPDATE: JSR 338 has been released with support for "fetch graphs" (entity graphs), a feature that appears to meet the needs described here. I'm no longer working with Java EE or JPA, so I haven't tested it out.

UPDATE: http://blog.ringerc.id.au/2012/06/update-on-jpa-21-and-fetch-control.html

UPDATE: Many of these problems are solved, albeit in non-standard and non-portable ways, by EclipseLink. EclipseLink extensions give much greater control over fetches at the entity definition level and, more importantly, via powerful per-query hints. Glassfish uses EclipseLink by default, and you can use EclipseLink instead of Hibernate as your persistence provider on JBoss AS 7.

Friday, June 22, 2012

Getting EclipseLink to play well on JBoss AS 7

It's currently a bit tricky to get EclipseLink to work smoothly on JBoss AS 7. There's a good write-up on it by Rich DiCroce on the JBoss Community site that I won't repeat; go read it if you want to use EclipseLink with AS7.

I've packaged Rich's VFS integration code, some JBoss logging integration code, and a platform adapter needed for older versions of EclipseLink into a simple as7-eclipselink-integration library that can be included directly in projects or bundled in the EclipseLink module installed in AS7.

The library build produces a ready-to-install AS7 module for EclipseLink with the integration helper code pre-installed.

It should simplify doing the integration work on a project.

Thursday, June 21, 2012

NullPointerException on @Inject site even with beans.xml?

Are you staring at a stack trace from a NullPointerException thrown by access to a CDI-injected site?

Have you checked to make sure beans.xml is in the right place:
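For reference, assuming a standard Maven project layout, the locations CDI looks for beans.xml are:

```
war: src/main/webapp/WEB-INF/beans.xml      -> WEB-INF/beans.xml in the archive
jar: src/main/resources/META-INF/beans.xml  -> META-INF/beans.xml in the archive
```

A beans.xml in the wrong place (e.g. in the war's META-INF) means the archive isn't treated as a bean archive at all, and every injection point silently stays null.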


Still not working? Does a message about activating CDI appear in the server log, but injection still not work?

Make sure you used the right @Inject annotation. Especially with code completion, and with dependencies pulling Google's Guice onto the classpath, it's easy to end up accidentally choosing com.google.inject.Inject instead of javax.inject.Inject.

Yep, I just wasted twenty minutes staring at code that wasn't working before I noticed this. Time to find out which dependency is pulling in Guice, add an exclusion, and nag them to make it an <optional>true</optional> dependency. It looks like in my case it's coming from org.jboss.shrinkwrap.resolver:shrinkwrap-resolver-impl-maven, so a quick:

<!-- inside the shrinkwrap-resolver-impl-maven <dependency> element -->
<exclusions>
  <exclusion>
    <groupId>org.sonatype.sisu</groupId>
    <artifactId>sisu-inject-plexus</artifactId>
  </exclusion>
</exclusions>

ensured that mistake wouldn't happen again.

JPA 2.1 will support CDI Injection in EntityListener - in Java EE 7

It's official, JPA 2.1 in Java EE 7 will support injection into EntityListener. I've checked the JPA 2.1 spec, and in draft 3 the revision notes state that:

Added support for use of CDI injection in entity listeners. Added requirement for Java EE container to pass reference to BeanManager on createContainerEntityManagerFactory call.

This is good news. The spec needs more eyes. If you use JPA, you need to have a look. Check whether any pain points you encounter in JPA 2 are addressed. Check the revision notes at the end and see if anything jumps out at you as problematic or unsafe. If you have concerns, contact the EG. More eyes on draft standards means fewer problems in released standards. EclipseLink has already implemented several JPA 2.1 features: arithmetic expressions with sub-queries, subquery support in the SELECT and FROM clauses, and JPQL generic function support. Go try them out in the EclipseLink 2.4.0 pre-release milestones.

It might be worth trying out the new goodies too. Sometimes a feature seems reasonable on paper, but works very poorly in practice. The spec-before-implementation development model of Java EE means we get to "enjoy" lots of those issues; help prevent more by testing the implementation early.

I didn't see anything about fetch strategies and modes, so I'm going to have to go nag them again. I had success the first time around in pushing for CDI injection into EntityListener, but the spec is significantly further along now. OTOH, the difficulty of controlling eager vs lazy properties on a per-query basis is a big pain point.

UPDATE: done, let's see what happens.

(BTW, it seems a shame that JPA 2.1 is being tied to EE 7, as that's still a long way off.)

JPA2 is very inflexible with eager/lazy fetching

This must be a common situation with JPA2, but seems poorly catered for. I feel like I must be missing something obvious. It's amazingly hard to override lazy fetching of properties on a per-query basis, forcing eager fetching of normally lazily fetched properties.

There doesn't appear to be any standard API to control whether fetching of a given relationship or property is done eagerly or lazily in a particular query. Nor is there a standard hint for setHint(...): EclipseLink offers some limited control, while Hibernate has no hint equivalent to its own Criteria.setFetchMode(). JPQL/HQL's left join fetch and the (somewhat broken) Criteria equivalent scale very poorly to more than a couple of properties or to nested lazy properties, and don't permit other fetch strategies like subselect or SELECT fetching to be used.
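To make the scaling problem concrete, here's the standard JPQL workaround against a hypothetical Order entity. Every association you want eagerly fetched needs its own fetch join clause, nested lazy properties need joins through the whole path, and the strategy is always a join, never a subselect or batched SELECT:

```
SELECT DISTINCT o
FROM Order o
LEFT JOIN FETCH o.items
LEFT JOIN FETCH o.customer
```

The non-portable alternative, if I recall the hint names correctly, is provider-specific hints such as EclipseLink's "eclipselink.join-fetch" and "eclipselink.batch" passed via setHint(...), which at least let you pick a strategy per query.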

Tell me I'm wrong and there's some facility in JPA for this.

Read on before replying with "just use left join fetch". I wish it were that simple, and not just because a join fetch isn't always an appropriate strategy.

Tuesday, June 19, 2012

JBAS011440: Can't find a persistence unit named null in deployment

Encountering the deployment error "JBAS011440: Can't find a persistence unit named null in deployment" on JBoss AS 7? Using Arquillian?

You probably put your persistence.xml in the wrong place in your archive. Try printing the archive contents.
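If it helps, the locations JPA expects persistence.xml in (standard Maven layout shown) are:

```
war: src/main/resources/META-INF/persistence.xml -> WEB-INF/classes/META-INF/persistence.xml
jar: src/main/resources/META-INF/persistence.xml -> META-INF/persistence.xml
```

With ShrinkWrap, `System.out.println(archive.toString(true))` dumps the archive's contents so you can verify where the file actually ended up.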

Friday, June 15, 2012

Lenovo "1802: Unauthorized Network Card" update

Progress made with Lenovo; see my update to the original post on this topic.

Overall, mixed results. Quick and positive immediate response followed by a quick return to talking to a brick wall. We'll see.

The website hasn't been updated yet, but it's only been a couple of days and it is a big website. I'd be amazed if it had been.

Monday, June 11, 2012

Lenovo sales/support can't do their jobs if Lenovo don't tell them the facts

I just bought a Lenovo T420. Of course, the T430 promptly came out, but that's not why I'm very angry at Lenovo. This is:


1802: Unauthorized Network Card is plugged in - Power off  and remove the miniPCI network card (413C/8138).

System is halted


UPDATE 2012-11-17: It's been six months, and I'm out of patience. I'm filing an ACCC complaint.


UPDATE 2012-06-24: I've received the replacement 3G card from Lenovo, and it works well. The Gobi 3000 based Sierra MC8790 card is a bit ... challenging ... under Fedora 17 and I wish I'd just been able to use my existing, working card. Still, this one should be lots faster when it's working, and it works under Windows at least.

They still haven't updated their website, so it's going to be nagging-time soon. Not impressed.


UPDATE 2012-06-15: After some help from lead_org on the Lenovo forums I was able to get in direct contact with Lenovo ANZ's customer care manager, customer care team lead, and social media managers. An amazing three hours after my initial email, I received a helpful reply. I must paraphrase, as I don't have the writer's explicit permission to quote the full message; it indicated that:

  • This was the first time the Australia/New Zealand group had seen this issue
  • They were sorry for the disruption and would courier me a compatible 3G card
  • The website would be fixed to inform potential customers of the restriction, and would educate their sales/support staff about it
  • (I'll quote this bit; it's a formula match for the line I see everywhere else on this topic):
With regards to the whitelist, it has existed since the introduction of modular cards - Lenovo have to qualify particular wireless devices not just internally but with the FCC and other telecommunications bodies around the world, hence the control around this list exists. As such there is no way to bypass this list as it would effectively allow violation of legal statutes with regards to telecommunications devices.

Kudos for a quick response, mea culpa, action to stop it happening in future and a gesture to make it right for me. Good on them.

I'm less thrilled by the vague and waffly detail-free explanation of the whitelist; I've been unable to find anywhere where Lenovo or a Lenovo rep has clearly stated which regulations affect them, but not Dell, Acer, or numerous small vendors. I asked for clarification on this point but have received no reply to date.

The important thing is that they'll fix their website and rep training to make sure others aren't misled; in the end, their reason for the whitelist can be "because we can and because we feel like it", and so long as they're up front about it, that's less bad.

Related writings by others:


Note the common theme: complete surprise that their laptop refuses to work, because they had no warning about the lockdown until they already had hardware for it. Not cool.


That error informs me that Lenovo has chosen not to permit me to use my 3G cellular modem card in my laptop. I'm not leasing this thing off them, I bought it outright. This isn't a technical limitation or restriction, it's a policy decision enforced by the system's EFI firmware or BIOS.

Before I bought this laptop, I'd heard that Lenovo and IBM before them used to restrict installed Mini-PCI-E and Mini-PCI cards in BIOS, refusing to boot if a non-Lenovo-branded 3G or WiFi card was installed. I had a 3G card I wanted to use, and anyway didn't want to be part of that sort of thing, so I called to confirm they didn't still do that.

The sales rep assured me in no uncertain terms that there is no such restriction. I was promised that a Lenovo T420 will boot up fine with a 3rd party 3G or WiFi card installed, though of course they can't guarantee the card will actually function, I'd need drivers, and they won't provide tech support for it. Fine.

I didn't completely trust the sales person - though they knew what MiniPCI-E was, so they were ahead of the curve - and called support. I asked them the same thing: Would the laptop refuse to boot and give me an error in BIOS if I put a non-Lenovo 3G or WiFi card in. I specifically asked about whether it'd give me a BIOS error and refuse to boot. Not only did they assure me it wouldn't, but they said the card would work out of the box if it were the same model as one of the ones Lenovo sells. No misunderstandings here, we weren't talking about USB stick 3G, but mini-pci-E.

At this point, let me propose to you a game. Be nice, though; it isn't the fault of the sales and support reps, since they're being misled and misinformed as well. Go to shopap.lenovo.com or your local Lenovo online store, or call their sales number for your region. Ask the poor innocent who is just trying to help you whether you can use a 3G card from your old Dell laptop in the T420 you're thinking of buying, since it's a mini-pci-E card and you have all the drivers. Help them; mention that you've heard that Lenovo, and IBM before them, used to stop machines starting up if there was a non-Lenovo wireless card in them. Ask them if that's still the case. Copy and paste your chat to the comments if you feel like it, but be very sure to blank out the rep's name or I'll get really angry at you too.

Betcha they'll tell you everything will be fine. Mine did and I have a record of it.

It seems Lenovo's corporate decision makers don't tell its sales and support reps everything they need to know:



That's from my new Lenovo T420, a great machine except for the whole locked-down-so-you-can-only-use-our-branded-hardware thing. (OK, and the lack of USB3/bluetooth4, but we can't have everything).
Of course, it's not surprising the sales and support folks don't know, since the issue isn't documented anywhere: not in the tech specs, the so-called datasheet (sales brochure) for the current model T430, or the T420/520 (US) or T420/520 (AU) pages, where they advise you to select a "wireless upgradable model" but don't bother to mention that the card had better be from them, Or Else.

It isn't even in the hardware maintenance manual! Points for Lenovo for publishing this, unlike most vendors, but it clearly needs to be a wee bit more complete.

There's even a knowledge base article about it, but you have to know the error code to search for before you can find it. The ThinkWiki article on it wasn't obvious either, isn't on Lenovo's site, and isn't something you're likely to find until you've already been burned.

The only official mention I found outside that knowledge base article was a tiny footnote on the product guide for resellers, where it says on page 160 (not kidding) regarding 802.11 WiFi:
Based on IEEE 802.11a and 802.11b. A customer can not upgrade to a different wireless Mini PCI adapter due to FCC regulations. Security screws prevent removal of this adapter. This wireless LAN product has been designed to permit legal operation world-wide in regions which it is approved. This product has been tested and certified to be interoperable by the Wireless Ethernet Compatibility Alliance and is authorized to carry the Wi-Fi logo.
and on 161 regarding "Wireless upgradable" 3G/4G (terms "3G" and "4G" not actually used in document):
Wireless upgradable allows the system to be wireless-enabled with an optional wireless Mini PCI or Mini PCI Express card. Designed to operate only with Lenovo options.
None of this handy information was, of course, on the store page, laptop brochure, or specs where customers might actually see it, and it was unknown to their own sales and support reps, who've been kept completely in the dark. Nor does it say, even in the above sales-rep-info footnote, "firmware will disable the laptop if a 3rd party card is detected"; it says only that it's designed to work with Lenovo parts, not that it'll actively and aggressively stop you using non-Lenovo ones.

Not happy. Their sales chat people on the website assuring me that the sky is purple and full of ponies is one thing, but their support folks not knowing this is something else entirely. It's not their support folks' fault their employer is lying to them, but it most certainly is the fault of Lenovo management for enforcing this policy and keeping it secret from the support and sales teams.

What makes me angriest is that the website for 3G capable models doesn't say anything about this restriction, nor is there any link about it regarding WiFi cards, or any mention in the specs for the MiniPCI-E slots or antennae, or ANYWHERE.



"Datasheet" excerpt specs from the T420. No sneaky little asterisks. No small text elsewhere.



The most important lesson in Java EE

The single most important thing I've learned about Java EE while working with it for the last two painful years is:

UPDATE: ... is not to post something significant that relates to the work of people who you respect when very angry about something completely unrelated. Especially after a bad night's sleep and when rapidly approaching burnout at work. Except, of course, that I apparently haven't learned that.

I'll leave the following post intact and follow up soon (after sleep!), rather than edit it in-place. I know it's an unfair and unreasonable post in many ways, but I guess it's also thought provoking if nothing else. If you read on, be sure to read Markus Eisele's response, where he makes some good and important points.

A proper follow-up is to come. The original post:


"Final" or "Released" doesn't mean "Ready", only "Public Beta"

I suspect people are only now starting to use Java EE 6 for real production apps. There's been tons of hype and fire, but so many of the technologies have been so amazingly buggy I'm amazed anybody could build much more than a Hello World with them until recently.

When a new Java library or technology is released, resist temptation. It's not like a PostgreSQL release where you can immediately start working on it. It isn't finished. It isn't documented. It isn't ready. Ignore the hype and wait three to six months before going anywhere near it unless you want to be an unofficial beta tester and spend all your time reporting bugs. Like I have. Over, and over, and over again.

I haven't seen a single product released in a release-worthy state yet. I've seen a hell of a lot released quite broken:

  • The whole CDI programming model was broken for the first six months to a year of Java EE 6's life, on both JBoss AS and Glassfish. Weld (the CDI RI) took at least six months after "final" release before the most obvious and severe bugs were ironed out, and was the cause of many of the worst bugs in Glassfish 3.x.
  • Glassfish 3.0 was unusably buggy
  • Glassfish 3.1 was still severely buggy especially with CDI projects until at least 3.1.1 + patches
  • JBoss AS 7.0.0 was missing whole subsystems and didn't become usable until 7.1.1.Final, though it's FANTASTIC now
  • Arquillian 1.0.0.Final wasn't really baked yet, though at least it worked amazingly well once the deficiencies were worked around.
  • Mojarra is IMO only barely useful now, two years after release
  • RichFaces 4.2.x was still biting me with bugs whenever I tried to do anything with it. Unicode bugs, CDATA/escaping bugs, lifecycle bugs, you name it.
  • Seam 3 was released as "3.0.0" when only some of the modules worked, and those only for JBoss AS. A year after release it's pretty solid, but if you tried to use it in the first few weeks or months like I did you would've suffered - esp if you tried using it on Glassfish.

Seriously, be ultra-conservative if you value your productivity.

Wednesday, June 6, 2012

Why don't we have Target Disk Mode for non-Apple machines?

I'm not an Apple fan, but there's one thing that consistently makes me really jealous about their hardware.

Despite generally scary-buggy EFI firmwares in their Intel CPU based machines, Apple's firmwares support what they call Target Disk Mode. This is a tech support and service dream, and has been supported since Apple moved over to "New World" PowerPC machines with their Forth-based OpenFirmware, i.e. for a very long time.

Target disk mode is great for data recovery, OS repairs, OS reinstalls, disk imaging, backups, accessing data on laptops with broken displays without having to rip the HDD out of them, and lots else. It's just great. Unless you have an Apple you don't get it, and there's no longer any good reason for that.

UPDATE July 2012: Kernel Newbies reports that support for exporting SCSI over USB and FireWire has been merged into the Linux kernel. This will make it much easier to produce bootable USB keys that export the host system's hard drives. Unfortunately it's useless with most machines as it appears to require a USB gadget or OTG port, or a FireWire port.

Tuesday, June 5, 2012

Be careful if using Vaadin with Maven

As I'm getting more and more sick of bugs in JSF2 and RichFaces 4 and making painfully little headway, I thought I'd give Vaadin a go with a side project I'm fiddling with.

First impression, three hours in: The Vaadin developers don't use or get Maven, and while there's official Vaadin support for Maven their support is only superficial. The Vaadin Eclipse plugin only kind-of works with m2eclipse, scattering generated files throughout the src/ tree, putting library jars in src/main/webapp/WEB-INF/lib, adding jars directly to the Eclipse build path for the project, etc. It's not really a good maven citizen and needs plenty of encouragement to do the right thing, though it does seem to work once you've undone all the damage it does when enabled on a Maven project.

To clean up, you seem to need to:

  • Enable the Vaadin facet in your project properties Project Facets section. Do not click on "Further configuration available..." or, if you do click it, make sure to uncheck "Create project template..." if you already have the vaadin servlet in your web.xml.


  • Check web.xml to make sure the Vaadin facet hasn't added a second copy of the vaadin servlet when it was enabled.

  • Remove src/main/webapp/WEB-INF/lib and instead add a dependency on com.vaadin:vaadin:6.7.1

  • Also add a dependency on com.google.gwt:gwt-user:2.3.0 because the vaadin artifact pom doesn't declare the dependency

  • In your project properties, under java build path, remove the VAADIN_ entries from the Libraries tab. Otherwise you'll have those on the build path as well as the Maven-provided dependencies, which could get mismatched-versions-tastic in a hurry.

  • Add src/main/webapp/VAADIN to your .gitignore; it contains generated code that's dumped there rather than in target/.
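For the dependency steps above, the pom.xml additions would look something like this, using the versions mentioned (check for newer releases before copying):

```xml
<dependencies>
  <!-- Replaces the jars Vaadin's plugin drops into WEB-INF/lib -->
  <dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>vaadin</artifactId>
    <version>6.7.1</version>
  </dependency>
  <!-- Needed explicitly because the vaadin pom doesn't declare it -->
  <dependency>
    <groupId>com.google.gwt</groupId>
    <artifactId>gwt-user</artifactId>
    <version>2.3.0</version>
  </dependency>
</dependencies>
```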

In my first three hours with Vaadin, I reported seven bugs, though only three are more than cosmetic/trivial and are reproducible. Not a reassuring start, but I'm going to give it a chance because they're all related to Maven integration in the Eclipse plugin and to slightly dodgy artifact packaging. They might make a great tool and just not get Maven.