Thursday, December 6, 2012

How to tell if a DLL is 32-bit (x86) or 64-bit (x64)

On Windows, there are several options for determining whether a given DLL is built for 32-bit or 64-bit CPUs.

You can open it in the wonderful Dependency Walker tool (kind of like ldd on steroids).

Alternatively, Visual Studio and the Windows SDK (either will do) include a dumpbin program that does much the same job as objdump. You can use it to determine the architecture a DLL was built for.

Start a Windows SDK Command Prompt or Visual Studio command prompt (or run setenv / vcvars) then:

dumpbin /HEADERS thefile.dll | findstr 14C

eg:

D:\WinDev>dumpbin /HEADERS zlib1.dll|findstr 14C
             14C machine (x86)
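
Note that 14C is the machine type for 32-bit x86 only; a 64-bit (x64) DLL reports 8664 instead. If you don't know what you've got, filter on the word machine instead and read off the result (file name illustrative):

D:\WinDev>dumpbin /HEADERS somefile.dll|findstr machine
            8664 machine (x64)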

Finally, install GNU File from GnuWin32, then:

D:\WinDev>"%PROGRAMFILES%\GnuWin32\bin\file.exe" zlib1.dll
zlib1.dll: PE32 executable for MS Windows (DLL) (GUI) Intel 80386 32-bit

Saturday, November 17, 2012

ACC complaint against Lenovo

Six months on, Lenovo have failed to make good on their stated intention to amend their website to clearly show that "Mobile Broadband Ready" machines only accept whitelisted Lenovo cards.

I'm writing an ACC complaint at the moment, alleging that they continue to sell devices with undisclosed restrictions as a bait-and-switch tactic to force users to buy their marked-up 3G hardware.

I would welcome submissions from anyone else affected by this issue; contact me at the email address listed in the right-hand bar of this blog, not via comments.

All Lenovo needs to do is fix its website to link "Mobile Broadband Ready" to a statement showing limitations, and to amend its product datasheets to show the PCI whitelist. This isn't a big thing to ask.

This is a purely private action and has nothing to do with my employer, past or present.

Tuesday, November 13, 2012

Joining 2nd Quadrant

I'm joining 2nd Quadrant, so I generally won't be posting new PostgreSQL entries here.

New PostgreSQL posts will appear on the 2nd Quadrant blog under my account there.

Thursday, November 1, 2012

Network silent (unattended) install of Google Chrome

I recently found myself needing to push Google Chrome out across all machines in a WDS (Windows Deployment Services) group using MDT (Microsoft Deployment Toolkit) Deployment Workbench task sequences. I needed a silent, unattended install of Google Chrome from an offline installer, and I still wanted it to auto-update.

This turns out to be badly documented and the method appears to have changed repeatedly, but once you find the right resources it works well.
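
The short version, for the impatient: grab Google's offline enterprise MSI installer and push it out with msiexec, something like the following (installer filename as Google distributed it at the time of writing). It still auto-updates via the bundled Google Update unless you disable that by policy:

msiexec /qn /i GoogleChromeStandaloneEnterprise.msi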

Monday, October 29, 2012

The Internet needs weeding

In librarians' jargon, weeding is the process of going through a collection of works (books, magazines, etc.) and removing ones that are no longer worth having. These works may be:

  • Out of date to the point of uselessness (like Windows 95 for Dummies);
  • Damaged and worn out;
  • Discredited;
  • Superseded by new revisions;
  • Surplus to requirements, where they're potentially still useful but space is needed for more important things; etc.

Why do you care? Because the Internet needs weeding, too. Right now, individual website operators must take responsibility for that themselves, and some aren't: either they can't manage their large libraries of ageing content, or they just don't want to.

This TechRepublic article was dubious when it was written, and it's now amazingly out of date and plain wrong, yet there's a steady stream of comments suggesting that people still refer to it. This article, from 2002, doesn't bother to mention little details like version numbers that might help place it in context. It claims, among other things, that MySQL doesn't support subqueries, views, or foreign keys. It also simply says that MySQL is "faster" and PostgreSQL is "slower". It's never been that simple, and it sure isn't now.

I discovered it because someone linked to it on Stack Overflow as if it was current information. Someone who usually does a fairly decent job writing informative answers; they just didn't bother to look at this particular article and see if it was any good before citing it.

In print, at least you can look at a book and go "Ugh, that's old". When an article has been carried over to a site's nice shiny new template and is surrounded by auto-included content with recent dates and context, how's a newbie to know it's complete garbage?

By the way, I don't claim it's easy to manage a library of tens or hundreds of thousands of ageing articles. Periodic review simply isn't practical. Websites that host large content libraries need to provide ways for users to flag content as obsolete, misleading, discredited or otherwise problematic. They also need to make an effort to ensure that their articles will age well by including prominent versions, dates, "as of ..." statements, etc at time of writing. This article would've been OK if it'd simply said "PostgreSQL 7.2" and "MySQL 3.3" (for example) instead of just "MySQL" and "PostgreSQL". It's easy to forget to do this, but being responsive to feedback means you can correct problems and remain a reasonably reputable source.

One of the things you and I - the community - can do is to flag use of these articles when you see them linked to, and try to contact site owners to take them down, add warnings indicating their versions and age, or otherwise fix them.

Time for me to try to have a chat with TechRepublic.

Wednesday, October 24, 2012

More uses for PostgreSQL arrays

Arrays and the Pg extensions to them are very useful for solving SQL problems that are otherwise tricky to deal with without procedural functions or tortured SQL. There are some good tricks with arrays that're worth knowing about, but aren't always immediately obvious from the documentation. I want to show you a few involving ANY and ALL, intarray, and array indexing.
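
A taste of the kind of thing I mean, using a hypothetical posts table with a tags text[] column (the last example needs the intarray extension):

SELECT * FROM posts WHERE 'postgres' = ANY (tags);          -- contains this element
SELECT * FROM posts WHERE tags @> ARRAY['postgres','sql'];  -- contains all of these
SELECT (ARRAY['a','b','c'])[2];                             -- array indexing: 'b'
SELECT ARRAY[1,2,3] & ARRAY[2,3,4];                         -- intarray intersection: {2,3}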

Friday, October 19, 2012

Natural sorting: An example of the utility of Pg's composite types and arrays

While looking at a recent Stack Overflow question I found myself wondering if it was possible to write a natural sort for strings containing numbers interleaved with non-number text, using only PostgreSQL's core functionality.

Natural sorts are an important usability feature, as Jeff points out in his post on natural sorts above.

So I asked for ideas, and it turns out that yes, you can, though it's a bit long-winded. Props to Erwin Brandstetter for persistently refining the approach. The general idea is to create a composite type of `(text,integer)` then sort on an array of that type. See the linked question for details.
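
A minimal sketch of the idea - not Erwin's exact solution, and assuming a hypothetical files(name text) table:

CREATE TYPE ai AS (t text, i integer);

SELECT name
FROM files
ORDER BY ARRAY(
    SELECT ROW(m[1], NULLIF(m[2], '')::integer)::ai
    FROM regexp_matches(name, '(\D*)(\d*)', 'g') AS m
);

With this, 'a2' sorts before 'a10', because ('a',2) < ('a',10).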

This illustrates how powerful Pg's composite types and arrays are, though I'm not sure you should consider any of the proposed solutions for real world production use.

It also helps to show how nice it'd be to have access to native, OS-independent Unicode collation in PostgreSQL using the International Components for Unicode (ICU) project, which would not only solve those nasty Windows-vs-Linux locale name issues when restoring dumps, but would also allow the use of advanced collation flags like UCOL_NUMERIC_COLLATION.

I'd really love to be able to use a custom collation function in Pg, either via an ORDER BY extension or by creating a collation that uses a user-defined collation function then using that collation in the COLLATE clause. Then I could write a C function to use ICU to do the special collation required for a particular job. This doesn't appear to be possible at the moment.

I recommend reading Jeff's post on natural sorting and why it's important; as usual, it's excellent.

Thursday, October 18, 2012

Generating random bytea values in PostgreSQL

While playing around with answering an interesting question on dba.stackexchange.com I wrote a simple C extension to PostgreSQL that generates random bytea values of a user-specified size. Fast.

In case anyone else is looking for a good way to dummy up random binary data in PostgreSQL, you can find the code in my scrapcode repository on GitHub.

See the extension's README for details.

There's a pure SQL version fast enough for generating a few hundred KB of data, too, or a couple of MB if you're patient.
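
The pure SQL version is roughly this shape (my reconstruction, not the repository code; produces 1024 random bytes):

SELECT decode(string_agg(lpad(to_hex(floor(random()*256)::int), 2, '0'), ''), 'hex')
FROM generate_series(1, 1024);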

Sunday, October 14, 2012

Avoiding PostgreSQL database corruption

TL;DR: Don't ever set fsync=off, don't kill -9 the postmaster then delete postmaster.pid, don't run PostgreSQL on network file systems.

Reports of database corruption on the PostgreSQL mailing lists are uncommon, but far from unheard of. While a few data corruption bugs have been found in PostgreSQL over the years, the vast majority of reports are caused by:

  • Administrator action;
  • Misconfiguration; or
  • Bad hardware
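
The first two are largely avoidable. If you've inherited a server and aren't sure how it's configured, the most dangerous settings take seconds to check from psql:

SHOW fsync;             -- must be on
SHOW full_page_writes;  -- leave on unless you really know what you're doing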

A recent mailing list post asked what can be done to reduce the chance of corruption and keep data safe.


If you think you have a corrupt PostgreSQL database, stop the database server and take a complete copy of the data directory now. See the Corruption page on the wiki. Then ask for help on the pgsql-general mailing list, or for critical/urgent issues contact a professional support provider.

Do not attempt to fix the problem before taking a complete copy of the entire data directory. You might make the problem much worse, turning a recoverable problem into unrecoverable data loss.


Here's my advice for avoiding DB corruption issues, with some general PostgreSQL administration advice thrown in for good measure:

Sunday, September 23, 2012

PostgreSQL packaging on Mac OS X is a mess

It appears to me - based primarily on what I see on Stack Overflow rather than direct Mac use experience - that PostgreSQL packaging on Mac OS X is a real mess.

There are at least four widely-used competing package systems:

Fink and MacPorts packages also exist, but seem to have either fallen into disuse or to "just work", so I don't see breakage reports about them.

Thursday, September 20, 2012

If you haven't yet read the DataGenetics post on PIN frequency, you should. It's an amazing article with some extremely impressive data visualisation.

If you see your bank/ATM PIN in the frequency tables there, smack yourself for being predictable, then go have a chat with a random number generator.

I was unsurprised but relieved to see that I'm unpredictable, but not suspiciously and unusually unpredictable. Just where you want to be.



BTW, if you're a software developer who has anything even tangentially to do with security and you don't know what "hash" and "salt" mean or the difference between hashing and encryption, consider yourself dead-fish-slapped. Go. Learn. Now, before you contribute to this dataset.
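
For reference, the minimal shape of salted hashing looks like this (Java, exception handling elided, secret being the String you're protecting; real systems should prefer a deliberately slow KDF like bcrypt or PBKDF2 over bare SHA-256):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

byte[] salt = new byte[16];
new SecureRandom().nextBytes(salt);          // fresh random salt per secret
MessageDigest md = MessageDigest.getInstance("SHA-256");
md.update(salt);                             // mix the salt in first...
byte[] hash = md.digest(secret.getBytes(StandardCharsets.UTF_8));
// ...so identical secrets no longer produce identical stored hashes.
// Store (salt, hash); the secret itself is never stored or recoverable.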

Thursday, August 23, 2012

For anyone who programs in any language or designs databases and formats

Anybody who does any kind of design or programming needs to read these two articles, and so does anybody who designs data formats or database schemas, or specifies any kind of standard.

The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)

Falsehoods programmers believe about names

If you haven't read - and acted on - these then you're probably producing bugs at a fair rate. I harp on about some of these points fairly regularly, so I thought I'd collect the articles in one place.

Sadly, it's often hard to avoid the whole "exactly one name structured in two parts" thing in the real world, because you're often dealing with other mis-designed systems that expect names split into those fields. Some of those systems aren't even software; they can be business processes, legal processes, and more.

The same applies to gender/sex, where it really isn't as simple as that [M/F] radio button or pulldown you probably have ... but half the legal processes and 3rd party APIs you work with think it is. Your user is trans-F-to-M or XXY indeterminate? They just have to shove themselves into a box.

Sadly, most people with non-western-European names living in western countries are used to butchering their names to fit the split-name model. For that matter, people with western European names are used to butchering them to work in systems that like to turn "Renée" into "Renee" or - depressingly frequently - "RenÃ©e". Does your last name have a space in it? You're doomed to being "Jacobsen, James van", "Jacobsen, James Van" (thanks to "helpful" auto-capitalisation) or even "Jacobsen, James V." forever.

Would you like to have your name "corrected" - to something wrong - or fail to validate in every second system you use? If not, consider those for whom that's true and fix your software. Ditto for gender/sex - don't ask for it if possible, and if you must, provide a free-form field.

I usually compromise by having a "display name" field that's used throughout the application, and by making no assumptions about names being unique, comparable, or in any way divisible. If interaction with 3rd party systems that want split-form names is required I have an additional field for the user to enter the name they usually use in identity documents, etc, but I don't use it within my app and I make its purpose clear.

Tuesday, July 17, 2012

Tips for interactively debugging CDI applications

I've grown to really like Contexts and Dependency Injection (JSR299, CDI), a part of the Java EE 6 suite. It's relatively simple and clean, it's extensible, and it allows for a really nice loosely-coupled programming model. It gives you the freedom to use event-driven or direct call operation and can take care of most of your lifetime/lifecycle issues for you.

Of course, this is my blog, so you know there's a "but". Sure enough there is, albeit a pretty minor one: it's a mess for interactive debugging, because stepping through a CDI invocation takes you through layers of Weld proxies, scope lookups, and all sorts of other crud you usually don't want to see. LOTS of it. CDI isn't unique in this respect, as anyone who's stepped through EJB calls will know, but it's perhaps worse than some due to extensive use of proxies, interceptors, etc.

There's a solution in Eclipse. It could be simpler and it could be more complete, but it's a heck of a lot better than nothing.

Java EE 7 needs improvements in app configuration mechanisms

Most applications need configuration options or preferences, a way for the user to edit them, and a way for the application to read and edit them. Java EE applications are no exception. Right now Java EE applications have numerous options for such settings - but no single choice that's simple, admin- and app-author friendly, and easy for both admin and app to modify.

This is not a new issue, as can be seen with this 2001 question on The Server Side. Back then they suggested *using JMX* ... which doesn't exactly fit the *simple* requirement.

Java EE is supposed to provide much of the groundwork for apps as part of the container, so the app author can get on with solving their problem. Right now it doesn't help much with application settings, and this needs improvement so that apps:
  • Don't need to each provide a custom UI in order to edit even the simplest settings
  • Can easily modify their own settings
  • Can guarantee the persistence of their settings across redeploys and across cluster nodes
  • Don't require a full database and data access layer / JPA / etc just to store simple configuration options.
I'd like to highlight this as an issue for Java EE 7 or 8.
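
To make that concrete, here's the sort of thing I'd like to be able to write. This is a purely hypothetical API - nothing like @Setting exists in Java EE today:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.inject.Provider;

@ApplicationScoped
public class MailSender {

    // @Setting is hypothetical: a container-managed, persistent,
    // admin-editable value that survives redeploys and spans cluster nodes.
    @Inject
    @Setting(name = "mail.smtp.host", defaultValue = "localhost")
    Provider<String> smtpHost;

    public void send(String to) {
        String host = smtpHost.get(); // always reads the current value
        // ... connect to 'host' and send ...
    }
}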

UPDATE: Markus Eisele pointed out a related post by Antonio Goncalves as part of a jsr342-experts discussion on configuration. Have a look at Antonio's piece; he's looking at it from a slightly different angle, but with many of the same issues.

See also this stack overflow post from a while back, where I asked for help on this topic and got crickets.


Vaadin is pleasantly productive

I think Vaadin 6 is the first truly productive web UI framework I've worked with. Productive for me, anyway; experiences will vary depending on the task at hand and the developer.

I strongly recommend Vaadin 6 for anyone doing web applications - highly stateful tools where user counts are relatively low and UI complexity is relatively high.

The only real caveats I have to that recommendation are that:

  • JPAContainer is very inefficient - largely due to an API design problem in Vaadin's Container API that makes it very hard to use the entity manager efficiently. Alas, this design flaw hasn't been remedied for Vaadin 7, so it's sometimes best to avoid JPAContainer and drive a Vaadin table from the outside instead.
  • JPAContainer doesn't play well with Hibernate and lazily loaded entities. A workaround is provided, but JPAContainer is clearly most used and tested with EclipseLink. This won't be a problem if you drive your tables with your own data model code.
  • Vaadin's stateful design means that each client costs you a non-trivial amount of memory on the server. If you want to serve large numbers of users who're doing simpler things you might want to consider a stateless JAX-RS API driven by client-side JavaScript and a lightweight templating engine, or look into one of the stateless server-side frameworks.

Vaadin 6 feels a little bit old at times, and you can certainly see its Java 1.4 history in more frequent use of casts rather than generics. It also requires an extension to integrate it with CDI - even then, only with pseudo-scopes.

Vaadin isn't beautiful, but damn, it works. Works well, and reliably. It's been at least two weeks since I found a bug in the tools I've been using, and that has never happened to me since I started using the Java EE stack!

Despite its flaws, JPAContainer is also impressively powerful. I really hope an enhanced Container API can be introduced to Vaadin 7 so it can reach its potential as a truly awesome RAD tool.

Tuesday, July 3, 2012

Waits 'till you're asleep ... and [X]



Did the marketing people not even think about this at all?

Caption contest time!

The uncropped original shows I'm not just cropping off important context; that ad is just a little disturbing.

Smart Stay - it waits 'till you're asleep then ...

PostgreSQL rocks, and so does explain.depesz.com

I was modifying one of my aggregate views today and I was struck by just how impressive PostgreSQL is, and how powerful relational databases in general really are. PostgreSQL was executing queries against the view a completely different way when one entry was being queried vs when the whole view was being requested.

You might take this for granted - but when you think about it, it's seriously amazing how a database can take your description of what you want and work out the how for itself. Mostly.

Imagine writing this yourself. Say you're working in Java. You have a complex Criteria query against several different tables related to information about customers. You now want to get information for just one customer, so you add a WHERE clause to the top level. If the database didn't nearly magically push this filter criterion down into all those complex sub-queries and joins you'd be at it for hours; days or weeks if you wanted to make it re-usable and generic across a set of similar criteria queries.

Instead, the DB just does it for you.
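
To see what I mean, compare the plans for a view like this (table and column names invented for the example):

CREATE VIEW customer_summary AS
SELECT c.id, c.name, count(o.id) AS order_count
FROM customer c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name;

EXPLAIN SELECT * FROM customer_summary;               -- plans the whole view
EXPLAIN SELECT * FROM customer_summary WHERE id = 42; -- the filter is pushed down into the join and aggregate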

Tuesday, June 26, 2012

Update on JPA 2.1 and fetch control

UPDATE: JSR 338 has been released with support for "fetch groups", which at first glance seem to meet this need.

I re-posted my query to javaee-spec-users after seeing no response on the jpa-spec users list. A reply from Linda DeMichiel on the javaee-spec list looks promising.

Separate discussion (twitter) suggests it's conference season so everyone's busy and sidetracked as well.

The spec proposal does indeed list:
Support for the use of "fetch groups" and/or "fetch plans" to provide further control over data that is fetched, detached, copied, and/or used in merging.
but I'd seen no discussion of it on the EG list, which I've been monitoring since near its start. (UPDATE: There'd been a wee bit; I just missed it.) I didn't realise they were working to an ordered, specific agenda, though it certainly makes sense to tackle such a big task in small chunks like that.

The JPA 2.1 spec proposal also has some other good stuff. Points of particular interest to me include:
  • Methods for dirty detection
  • Additional event listeners and callback methods; availability of entity manager to callbacks
Dirty detection in particular has caused me a lot of pain in the past, so it's good to see it on the list. It'd simplify quite a few tasks around working with JPA, particularly in apps with longer-lived sessions like interactive GUI / swing apps, Vaadin web apps, etc.

One thing that isn't there that perhaps should be is handling failure more cleanly. Currently one needs to clone entity graphs before attempting to persist them using a tool like org.hibernate.util.SerializationHelper, because they may be in an inconsistent state if a persist operation fails. It's slow and ugly to need to serialize/deserialize to clone an entity graph, but hard to get around because dynamic weaving and enrichment means you can't properly clone an entity just by allocating a new one and copying properties into it. Worse, you have to handle all the cascades yourself. Many common use cases involve situations where failure is normal: serialization failures of serializable transactions, optimistic locking issues, etc. Often you want to re-fetch the entities and re-do the work from scratch, but that's painful if you have user-modified state in those entities and all you want to do is retry a merge after a database serialization failure.
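
For the record, the workaround looks something like this (entity and variable names invented; em is the EntityManager):

// Deep-copy the detached entity graph before attempting the risky operation
Invoice work = (Invoice) org.hibernate.util.SerializationHelper.clone(invoice);
try {
    em.merge(work);
    em.flush();
} catch (PersistenceException e) {
    // 'invoice' is untouched: clone it again and retry from scratch
}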

Guess I'd better pipe up about that too. It isn't news, but if it isn't on the agenda maybe it should be added for discussion.

Now is the time to join the jpa-spec users list on http://java.net/projects/jpa-spec and pipe up about practical, real-world issues you face with JPA. Even if everyone's out conferencing, email queues up nicely.

Monday, June 25, 2012

Updated AS7 <-> EclipseLink integration

I've pushed an update to the EclipseLink <-> AS7 integration library github.com/ringerc/as7-eclipselink-integration.

Version 1.1 now produces a proper JBoss AS 7 persistence provider integration module. It'll automatically inject the right properties into EclipseLink, so you don't need to modify persistence.xml or set system properties anymore.

I haven't found a proper solution to the null static metamodel issue or the dynamic weaving issues. Rich's code for VFS integration plus the logging integration code and the rest all just works automagically now, though.

I'd like to add better integration tests, but I'm being held back by today's JBoss AS 7 issue, https://issues.jboss.org/browse/AS7-3955 .

NOTE: This version of the integration code doesn't seem to work on the latest AS nightly, but it's fine on 7.1.1.Final.

Hope this is useful.

Saturday, June 23, 2012

Mail to the JPA 2.1 Expert Group re fetch control

I'm setting up another opportunity to look foolish - which I view as a good thing; if I don't risk looking foolish, I don't learn nearly as much.

I've mailed the JPA 2.1 EG re control over fetch mode and strategy on a per-property, per query basis, as I'm concerned this may not otherwise be considered for JPA 2.1 and thus Java EE 7. As previously expressed here I think it's a big pain point in Java EE development.

I'd appreciate your support if you've found JPA 2's control over eager vs lazy fetching limiting and challenging in your projects. Please post on the JPA 2.1 users mailing list.


UPDATE: JSR 338 has been released with support for "fetch graphs", a feature that appears to meet these needs. I'm no longer working with Java EE or JPA, so I haven't tested it out.

UPDATE: http://blog.ringerc.id.au/2012/06/update-on-jpa-21-and-fetch-control.html

UPDATE: Many of these problems are solved, albeit in non-standard and non-portable ways, by EclipseLink. EclipseLink extensions can be used to gain much greater control over fetches at the entity definition level and, more importantly, via powerful per-query hints. Glassfish uses EclipseLink by default, but you can use EclipseLink instead of Hibernate as your persistence provider in JBoss AS 7.

Friday, June 22, 2012

Getting EclipseLink to play well on JBoss AS 7

It's currently a bit tricky to get EclipseLink to work smoothly on JBoss AS 7. There's a good write-up on it by Rich DiCroce on the JBoss Community site that I won't repeat; go read it if you want to use EclipseLink with AS7.

I've packaged Rich's VFS integration code, some JBoss logging integration code, and a platform adapter needed for older versions of EclipseLink into a simple as7-eclipselink-integration library that can be included directly in projects or bundled in the EclipseLink module installed in AS7.

The library build produces a ready-to-install AS7 module for EclipseLink with the integration helper code pre-installed.

It should simplify doing the integration work on a project.

Thursday, June 21, 2012

NullPointerException on @Inject site even with beans.xml?

Are you staring at a stack trace from a NullPointerException thrown by access to a CDI-injected site?

Have you checked that beans.xml is in the right place?

  • WEB-INF/beans.xml for a war
  • META-INF/beans.xml for a jar (including ejb-jars and jars inside a war's WEB-INF/lib)

An empty beans.xml is enough; it just has to exist for CDI to be activated.
Still not working? Does a message about activating CDI appear in the server log, but injection still not work?

Make sure you used the right @Inject annotation. Especially with code completion and dependencies pulling Google's Guice onto the classpath it's easy to land up accidentally choosing com.google.inject.Inject instead of javax.inject.Inject.

Yep, I just wasted twenty minutes staring at code that wasn't working before I noticed this. Time to find out which dependency is pulling in Guice, add an exclusion, and nag them to make it an <optional>true</optional> dependency. It looks like in my case it's coming from org.jboss.shrinkwrap.resolver:shrinkwrap-resolver-impl-maven, so a quick:

<exclusion>
  <groupId>org.sonatype.sisu</groupId>
  <artifactId>sisu-inject-plexus</artifactId>
</exclusion>

ensured that mistake wouldn't happen again.

JPA 2.1 will support CDI Injection in EntityListener - in Java EE 7

It's official, JPA 2.1 in Java EE 7 will support injection into EntityListener. I've checked the JPA 2.1 spec, and in draft 3 the revision notes state that:

Added support for use of CDI injection in entity listeners. Added requirement for Java EE container to pass reference to BeanManager on createContainerEntityManagerFactory call.

This is good news. The spec needs more eyes. If you use JPA, you need to have a look. Check to see if any pain points you encounter in JPA 2 are addressed. Check the revision notes at the end and see if something jumps out at you as being problematic or unsafe. If you have concerns, contact the EG. More eyes on draft standards means fewer problems in released standards. EclipseLink has already implemented JPA 2.1 arithmetic expressions with sub-queries, support for select and from clause subqueries, and JPA 2.1 JPQL generic function support, so go try them out in EclipseLink 2.4.0 pre-releases using the latest milestone.
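
For illustration, this is the sort of listener that injection support makes possible (class and bean names are mine):

import javax.inject.Inject;
import javax.persistence.PrePersist;

public class AuditListener {

    @Inject
    AuditLog auditLog; // CDI injection into an entity listener - new in JPA 2.1

    @PrePersist
    void recordInsert(Object entity) {
        auditLog.record("persisting " + entity);
    }
}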

It might be worth trying out the new goodies too. Sometimes a feature seems reasonable on paper, but works very poorly in practice. The spec-before-implementation development model of Java EE means we get to "enjoy" lots of those issues; help prevent more by testing the implementation early.

I didn't see anything about fetch strategies and modes, so I'm going to have to go nag them again. I had success the first time around in pushing for CDI injection into EntityListener, but the spec is significantly further along now. OTOH, the difficulty of controlling eager vs lazy properties on a per-query basis is a big pain point.

UPDATE: done, let's see what happens.

(BTW, it seems a shame that JPA 2.1 is being tied to EE 7, as that's still a long way off.)

JPA2 is very inflexible with eager/lazy fetching

This must be a common situation with JPA2, but it seems poorly catered for. I feel like I must be missing something obvious. It's amazingly hard to override lazy fetching of properties on a per-query basis, forcing eager fetching of normally lazily fetched properties.

There doesn't appear to be any standard API to control whether fetching of a given relationship or property is done eagerly or lazily in a particular query. Nor is there a standard hint for setHint(...) - EclipseLink offers some limited control, but Hibernate has no hint equivalent to its own Criteria.setFetchMode(). JPQL/HQL's left join fetch and the (somewhat broken) Criteria equivalent scale very poorly to more than a couple of properties or to nested lazy properties, and don't permit other fetch strategies like subselect or SELECT fetching to be used.

Tell me I'm wrong and there's some facility in JPA for this.

Read on before replying with "just use left join fetch". I wish it were that simple, and not just because a join fetch isn't always an appropriate strategy.
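
To spell out why, here's the recommended approach on a hypothetical Customer entity:

SELECT DISTINCT c FROM Customer c
    LEFT JOIN FETCH c.orders
    LEFT JOIN FETCH c.addresses

Two collection fetches already risk a row-explosion cartesian product (Hibernate refuses to fetch multiple bags simultaneously), nested lazy properties are out of reach, and there's no way to ask for a subselect or SELECT fetch strategy here at all.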

Tuesday, June 19, 2012

JBAS011440: Can't find a persistence unit named null in deployment

Encountering the deployment error "JBAS011440: Can't find a persistence unit named null in deployment" on JBoss AS 7? Using Arquillian?

You probably put your persistence.xml in the wrong place in your archive. Try printing the archive contents.
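
With ShrinkWrap you can do both at once - put persistence.xml where it belongs and print the layout to check it (archive and resource names illustrative):

WebArchive war = ShrinkWrap.create(WebArchive.class, "test.war")
        .addAsResource("test-persistence.xml", "META-INF/persistence.xml") // lands under WEB-INF/classes
        .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
System.out.println(war.toString(true)); // verbose listing of the archive contents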

Friday, June 15, 2012

Lenovo "1802: Unauthorized Network Card" update

Progress made with Lenovo; see my update to the original post on this topic.

Overall, mixed results. Quick and positive immediate response followed by a quick return to talking to a brick wall. We'll see.

The website hasn't been updated yet, but it's only been a couple of days and it is a big website. I'd be amazed if it had been.

Monday, June 11, 2012

Lenovo sales/support can't do their jobs if Lenovo don't tell them the facts

I just bought a Lenovo T420. Of course, the T430 promptly came out, but that's not why I'm very angry at Lenovo. This is:


1802: Unauthorized Network Card is plugged in - Power off  and remove the miniPCI network card (413C/8138).

System is halted


UPDATE 2012-11-17: It's been six months, and I'm out of patience. I'm filing an ACC complaint.


UPDATE 2012-06-24: I've received the replacement 3G card from Lenovo, and it works well. The Gobi 3000 based Sierra MC8790 card is a bit ... challenging ... under Fedora 17 and I wish I'd just been able to use my existing, working card. Still, this one should be lots faster when it's working, and it works under Windows at least.

They still haven't updated their website, so it's going to be nagging-time soon. Not impressed.


UPDATE 2012-05-15: After some help from lead_org on the Lenovo forums I was able to get in direct contact with Lenovo ANZ's customer care manager, customer care team lead, and social media managers. An amazing three hours after my initial email, I received a helpful reply. I must paraphrase as I don't have the writer's explicit permission to quote the full message; it indicated that:

  • This was the first time the Australia/New Zealand group had seen this issue
  • They were sorry for the disruption and would courier me a compatible 3G card
  • The website would be fixed to inform potential customers of the restriction, and sales/support staff would be educated about it
  • (I'll quote this bit; it's a formula match for the line I see everywhere else on this topic):
    With regards to the whitelist, it has existed since the introduction of modular cards - Lenovo have to qualify particular wireless devices not just internally but with the FCC and other telecommunications bodies around the world, hence the control around this list exists. As such there is no way to bypass this list as it would effectively allow violation of legal statutes with regards to telecommuncations devices.

Kudos for a quick response, a mea culpa, action to stop it happening in future, and a gesture to make it right for me. Good on them.

I'm less thrilled by the vague and waffly detail-free explanation of the whitelist; I've been unable to find anywhere where Lenovo or a Lenovo rep has clearly stated which regulations affect them, but not Dell, Acer, or numerous small vendors. I asked for clarification on this point but have received no reply to date.

The important thing is that they'll fix their website and rep training to make sure others aren't misled; in the end, their reason for the whitelist can be "because we can and because we feel like it", and so long as they're up front about it that's less bad.

Related writings by others:


Note the common theme: complete surprise that their laptop refuses to work, because they had no warning about the lockdown until they already had hardware for it. Not cool.


That error informs me that Lenovo has chosen not to permit me to use my 3G cellular modem card in my laptop. I'm not leasing this thing off them, I bought it outright. This isn't a technical limitation or restriction, it's a policy decision enforced by the system's EFI firmware or BIOS.

Before I bought this laptop, I'd heard that Lenovo and IBM before them used to restrict installed Mini-PCI-E and Mini-PCI cards in BIOS, refusing to boot if a non-Lenovo-branded 3G or WiFi card was installed. I had a 3G card I wanted to use, and anyway didn't want to be part of that sort of thing, so I called to confirm they didn't still do that.

The sales rep assured me in no uncertain terms that there is no such restriction. I was promised that a Lenovo T420 will boot up fine with a 3rd party 3G or WiFi card installed, though of course they can't guarantee the card will actually function, I'd need drivers, and they won't provide tech support for it. Fine.

I didn't completely trust the sales person - though they knew what MiniPCI-E was, so they were ahead of the curve - and called support. I asked them the same thing: would the laptop refuse to boot and give me an error in BIOS if I put a non-Lenovo 3G or WiFi card in? I specifically asked whether it'd give me a BIOS error and refuse to boot. Not only did they assure me it wouldn't, but they said the card would work out of the box if it were the same model as one of the ones Lenovo sells. No misunderstandings here; we weren't talking about a USB 3G stick, but a Mini PCI-E card.

At this point, let me propose to you a game. Be nice, though; it isn't the fault of the sales and support reps, who are being misled and misinformed too. Go to shopap.lenovo.com or your local Lenovo online store, or call their sales number for your region. Ask the poor innocent who is just trying to help you whether you can use a 3G card from your old Dell laptop in the T420 you're thinking of buying, since it's a Mini PCI-E card and you have all the drivers. Help them; mention that you've heard that Lenovo, and IBM before them, used to stop machines starting up if there was a non-Lenovo wireless card in them. Ask them if that's still the case. Copy and paste your chat to the comments if you feel like it, but be very sure to blank out the rep's name or I'll get really angry at you too.

Betcha they'll tell you everything will be fine. Mine did and I have a record of it.

It seems Lenovo's corporate decision makers don't tell its sales and support reps everything they need to know:



That's from my new Lenovo T420, a great machine except for the whole locked-down-so-you-can-only-use-our-branded-hardware thing. (OK, and the lack of USB3/bluetooth4, but we can't have everything).
Of course, it's not surprising that the sales and support folks don't know, since the issue isn't documented anywhere in the tech specs, the so-called datasheet (sales brochure) for the current model T430, or the T420/520 (US) or T420/520 (AU) equivalents. There they advise you to select a "wireless upgradable model" but don't bother to mention that the card'd better be from them, Or Else.

It isn't even in the hardware maintenance manual! Points for Lenovo for publishing this, unlike most vendors, but it clearly needs to be a wee bit more complete.

There's even a knowledge base article about it, but you have to know the error code to search for before you can find it. The ThinkWiki article on it wasn't obvious either, isn't on Lenovo's site, and isn't something you're likely to find until you've already been burned.

The only official mention I found outside that knowledge base article was a tiny footnote on the product guide for resellers, where it says on page 160 (not kidding) regarding 802.11 WiFi:
Based on IEEE 802.11a and 802.11b. A customer can not upgrade to a different wireless Mini PCI adapter due to FCC regulations. Security screws prevent removal of this adapter. This wireless LAN product has been designed to permit legal operation world-wide in regions which it is approved. This product has been tested and certified to be interoperable by the Wireless Ethernet Compatibility Alliance and is authorized to carry the Wi-Fi logo.
and on page 161 regarding "wireless upgradable" 3G/4G (the terms "3G" and "4G" aren't actually used in the document):
Wireless upgradable allows the system to be wireless-enabled with an optional wireless Mini PCI or Mini PCI Express card. Designed to operate only with Lenovo options.
None of this handy information was, of course, on the store page, laptop brochure, or specs where customers might actually see it, and it was unknown to their own sales and support reps, who've been kept completely in the dark. Nor does it say, even in the above reseller-guide footnote, "firmware will disable the laptop if a 3rd party card is detected"; it says the slot is only designed to work with Lenovo parts, not that the machine will actively and aggressively stop you using non-Lenovo ones.

Not happy. Their sales chat people on the website assuring me that the sky is purple and full of ponies is one thing, but their support folks not knowing this is something else entirely. It's not their support folks' fault their employer is lying to them, but it most certainly is the fault of Lenovo management for enforcing this policy and keeping it secret from the support and sales teams.

What makes me angriest is that the website for 3G capable models doesn't say anything about this restriction, nor is there any link about it regarding WiFi cards, or any mention in the specs for the MiniPCI-E slots or antennae, or ANYWHERE.



"Datasheet" excerpt specs from the T420. No sneaky little asterisks. No small text elsewhere.



The most important lesson in Java EE

The single most important thing I've learned about Java EE while working with it for the last two painful years is:

UPDATE: ... is not to post something significant that relates to the work of people who you respect when very angry about something completely unrelated. Especially after a bad night's sleep and when rapidly approaching burnout at work. Except, of course, that I apparently haven't learned that.

I'll leave the following post intact and follow up soon (after sleep!), rather than edit it in-place. I know it's an unfair and unreasonable post in many ways, but I guess it's also thought provoking if nothing else. If you read on, be sure to read Markus Eisele's response, where he makes some good and important points.

A proper follow-up is to come. The original post:


"Final" or "Released" doesn't mean "Ready", only "Public Beta"

I suspect people are only now starting to use Java EE 6 for real production apps. There's been tons of hype and fire, but so many of the technologies have been so amazingly buggy I'm amazed anybody could build much more than a Hello World with them until recently.

When a new Java library or technology is released, resist temptation. It's not like a PostgreSQL release, where you can immediately start working with it. It isn't finished. It isn't documented. It isn't ready. Ignore the hype and wait three to six months before going anywhere near it, unless you want to be an unofficial beta tester and spend all your time reporting bugs. Like I have. Over, and over, and over again.

I haven't seen a single product released in a release-worthy state yet. I've seen a hell of a lot released quite broken:

  • The whole CDI programming model was broken for the first six months to a year of Java EE 6's life, on both JBoss AS and Glassfish. Weld (the CDI RI) took at least six months after "final" release before the most obvious and severe bugs were ironed out, and was the cause of many of the worst bugs in Glassfish 3.x.
  • Glassfish 3.0 was unusably buggy
  • Glassfish 3.1 was still severely buggy especially with CDI projects until at least 3.1.1 + patches
  • JBoss AS 7.0.0 was missing whole subsystems and didn't become usable until 7.1.1.Final, though it's FANTASTIC now
  • Arquillian 1.0.0.Final wasn't really baked yet, though at least it worked amazingly well once the deficiencies were worked around.
  • Mojarra is IMO only barely useful now, two years after release
  • RichFaces 4.2.x was still biting me with bugs whenever I tried to do anything with it. Unicode bugs, CDATA/escaping bugs, lifecycle bugs, you name it.
  • Seam 3 was released as "3.0.0" when only some of the modules worked, and those only for JBoss AS. A year after release it's pretty solid, but if you tried to use it in the first few weeks or months like I did you would've suffered - esp if you tried using it on Glassfish.

Seriously, be ultra-conservative if you value your productivity.

Wednesday, June 6, 2012

Why don't we have Target Disk Mode for non-Apple machines?

I'm not an Apple fan, but there's one thing that consistently makes me really jealous about their hardware.

Despite generally scary-buggy EFI firmwares in their Intel CPU based machines, Apple's firmwares support what they call Target Disk Mode. This is a tech support and service dream, and has been supported since Apple moved over to "New World" PowerPC machines with their Forth based OpenFirmware, ie for a very long time.

Target disk mode is great for data recovery, OS repairs, OS reinstalls, disk imaging, backups, accessing data on laptops with broken displays without having to rip the HDD out of them, and lots else. It's just great. Unless you have an Apple you don't get it, and there's no longer any good reason for that.

UPDATE July 2012: Kernel Newbies reports that support for exporting SCSI over USB and FireWire has been merged into the Linux kernel. This will make it much easier to produce bootable USB keys that export the host system's hard drives. Unfortunately it's useless with most machines as it appears to require a USB gadget or OTG port, or a FireWire port.

Tuesday, June 5, 2012

Be careful if using Vaadin with Maven

As I'm getting more and more sick of bugs in JSF2 and RichFaces 4 and making painfully little headway, I thought I'd give Vaadin a go with a side project I'm fiddling with.

First impression, three hours in: The Vaadin developers don't use or get Maven, and while there's official Vaadin support for Maven their support is only superficial. The Vaadin Eclipse plugin only kind-of works with m2eclipse, scattering generated files throughout the src/ tree, putting library jars in src/main/webapp/WEB-INF/lib, adding jars directly to the Eclipse build path for the project, etc. It's not really a good maven citizen and needs plenty of encouragement to do the right thing, though it does seem to work once you've undone all the damage it does when enabled on a Maven project.

To clean up, you seem to need to:

  • Enable the Vaadin facet in your project properties Project Facets section. Do not click on "Further configuration available..." or, if you do click it, make sure to uncheck "Create project template..." if you already have the vaadin servlet in your web.xml.


  • Check web.xml to make sure the Vaadin facet hasn't added a second copy of the vaadin servlet when it was enabled.

  • Remove src/main/webapp/WEB-INF/lib and instead add a dependency on com.vaadin:vaadin:6.7.1 (see the pom snippet after this list)

  • Also add a dependency on com.google.gwt:gwt-user:2.3.0, because the vaadin artifact's pom doesn't declare the dependency

  • In your project properties, under java build path, remove the VAADIN_ entries from the Libraries tab. Otherwise you'll have those on the build path as well as the Maven-provided dependencies, which could get mismatched-versions-tastic in a hurry.

  • Add src/main/webapp/VAADIN to your .gitignore; it contains generated code that's being dumped there not in target/.
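
The dependencies from the two steps above, in pom.xml form:

<dependency>
  <groupId>com.vaadin</groupId>
  <artifactId>vaadin</artifactId>
  <version>6.7.1</version>
</dependency>
<dependency>
  <!-- not declared by the vaadin artifact's pom, so it must be explicit -->
  <groupId>com.google.gwt</groupId>
  <artifactId>gwt-user</artifactId>
  <version>2.3.0</version>
</dependency>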

In my first three hours with Vaadin, I reported seven bugs, though only three are more than cosmetic/trivial and are reproducible. Not a reassuring start, but I'm going to give it a chance because they're all related to Maven integration in the Eclipse plugin and to slightly dodgy artifact packaging. They might make a great tool and just not get Maven.

Wednesday, May 30, 2012

Downloading sources and JavaDoc for JBoss AS 7.1 using Maven

If you work in an IDE like NetBeans, or Eclipse with m2eclipse, you'll be used to having the sources and JavaDoc of Maven artifacts downloaded automatically. This usually works well.

Sometimes you want the sources for things that aren't direct dependencies though. I most often run into this when debugging an issue takes me into the guts of an application server. For example, when debugging yet another annoying Mojarra issue on JBoss AS 7.1 I needed sources for several unrelated chunks of the server. I didn't particularly want to build the server from scratch; I was interested in behavior on a release build. So I just asked Maven to download the dependency sources and JavaDoc for me:

$ mvn dependency:resolve -Dclassifier=javadoc
$ mvn dependency:sources

Of course, this'll only help you if library packagers have bothered to bundle the sources and JavaDoc of their library with their binary artifacts. This should be mandatory for upload into Central, but unfortunately isn't enforced or even strongly encouraged so artifacts are often missing sources. Maven can't (yet) use the SCM info to check them out for you, so you're stuck dealing with this mess yourself.

With the increasing profusion of complex dependencies in use in apps, we really need to make it mandatory to publish sources and JavaDoc to central alongside any binary artifacts for packages with open source licenses.

Can't check out a remote branch from git?

Trying to check a remote branch out with git, and having it deny the existence of the remote branch? Or report something like:

$ git checkout --track -b 7.1 upstream/7.1
fatal: git checkout: updating paths is incompatible with switching branches.
Did you intend to checkout 'upstream/7.1' which can not be resolved as commit?

You need to fetch the remote. Not just the branch you want; git fetch upstream 7.1 won't work, you need git fetch upstream to fetch branch refs etc. See http://stackoverflow.com/questions/945654/git-checkout-on-a-remote-branch-does-not-work and http://www.btaz.com/scm/git/fatal-git-checkout-updating-paths-is-incompatible-with-switching-branches/
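
So the working sequence is:

$ git fetch upstream
$ git checkout --track -b 7.1 upstream/7.1
Branch 7.1 set up to track remote branch 7.1 from upstream.
Switched to a new branch '7.1'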

There's also a handy tool I didn't know about despite using git for a fair while now:

$ git remote show upstream
* remote upstream
  Fetch URL: git://github.com/jbossas/jboss-as.git
  Push  URL: git://github.com/jbossas/jboss-as.git
  HEAD branch: master
  Remote branches:
    7.1    new (next fetch will store in remotes/upstream)
    master new (next fetch will store in remotes/upstream)
  Local ref configured for 'git push':
    master pushes to master (up to date)

Extremely handy Java debugging trick

Tuesday, May 29, 2012

Adventures in Eclipse-land - Coming to Eclipse from NetBeans

When working with JBoss AS 7, it appears that it's just assumed you'll be working with Eclipse. Nobody seems to really talk about NetBeans, the IDE I've been using most of the time for the last few years. I thought there had to be a reason for this, and decided to give it another go.

I'll be jotting things as I go in the hopes of helping others out, and to highlight potential usability issues.

So far, the verdict is pretty simple: Eclipse is painful to use in places and does some things quite strangely, but it's all worth it for the dynamic web module. I cannot possibly express how wonderful it is to use after the compile-redeploy-test cycles I've been doing while using JBoss AS 7 on NetBeans.

Friday, May 25, 2012

PostgreSQL usability - PgAdmin-III and Pg need some usability love

As anyone who's read much here will know, I'm a huge fan of PostgreSQL, the powerful open source relational database management system. It is an amazing project with a team that keeps on releasing high-quality updates full of very useful new features and improvements.

This is my blog, so there must be a "but", right? You're quite right.

I already wrote PostgreSQL: Great even when you can see the warts in response to a perhaps overly glowing write-up someone did a while ago. I'm not covering that again here; this post is specifically about a topic more and more dear to my heart: usability.

As a long-time UNIX geek I've always found PostgreSQL quite easy to use, because I live on the command line where psql is just wonderful. Recently, though, I had to use PostgreSQL on Windows, and in the process couldn't help seeing it from the new user's point of view. In fact, I tried to think like a new user as I performed the tasks I needed to do.

It wasn't all roses. I knew enough about Pg to get past the issues pretty easily, but some of them would be real roadblocks for new or even intermediate users, and I think they're worth highlighting. Many of the issues were with PgAdmin-III, but far from all of them.

This post started out as a minor critique of a few usability sore points in Pg and PgAdmin-III. As I went through step-by-step producing screenshots and thinking through the process as a user, though, I realised it's an absolute, complete and utter train-wreck. If this had been my first experience with PostgreSQL, I'd be a MySQL, MS-SQL or Oracle user now.

You CAN download legal Windows 7 ISO images, it's just well hidden

UPDATE 2015-04-08: It looks like the ISOs are no longer available from DigitalRiver, or at least not at their old locations, since Windows 7 went EOL.

Yesterday I had a frustrating time after discovering that the one and only non-OEM-contaminated Win7 Pro x64 disk I had was faulty - halfway through a Win7 reinstall. All I wanted was a download of replacement media, as I had a license already. This should've been simple, but no! Microsoft direct you to your OEM or retailer in all documentation on the topic, never even hinting that you can just download the ISOs without needing MSDN.

I even called Microsoft support, which is a mark of desperation if ever I saw one. They were as unhelpful as expected, patronisingly explaining that since the license key is from a computer manufacturer, I'd have to contact the OEM for replacement media. But I didn't want the OEM's butchered, mangled Windows install media, just stock Win7 - and even if I had wanted it, they'd only post it to me, not offer a download, and they only supply service pack 0 while I needed SP1-integrated ISOs.

I have a license to the product. You can't use the media without a license. The media have no copy protection, so it's not like they particularly try to prevent copying or distribution. Yet the official story is that you just can't download it unless you've bought a new license to a retail version online through the Microsoft Store. Pathetic.

After too long digging around the net I found out that, yes, you can just download the ISOs after all. Microsoft just apparently don't admit it or talk about it. That's customer-hostile even for them. Go here:

http://www.mydigitallife.info/official-windows-7-sp1-iso-from-digital-river/

or http://www.mytechguide.org/10042/windows-7-service-pack-sp1-official-digitalriver-download/

... to get service pack 1 ISOs. Legally, from Digital River, MS's online/digital distributor. Yes, that means you must have a valid license and key to use them, as normal.

They even provide SHA1 checksums, which NOBODY seems to publish for Windows ISOs or pretty much anything else in the Windows world.

  • Windows 7 Ultimate SP1 x86 SHA1:
    92C1ADA4FF09C76EC2F1974940624CAB7F822F62
  • Windows 7 Ultimate SP1 (SP1-U Media Refresh) x86 SHA1:
    65FCE0F445D9BF7E78E43F17E441E08C63722657
  • Windows 7 Ultimate SP1 x64 SHA1:
    1693B6CB50B90D96FC3C04E4329604FEBA88CD51
  • Windows 7 Ultimate SP1 (SP1-U Media Refresh) x64 SHA1:
    36AE90DEFBAD9D9539E649B193AE573B77A71C83

I've verified the x64 SP1-U checksum to match what I downloaded from Digital River. I haven't checked the others.
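
To check your own download on Windows without installing anything, certutil (built in since Windows Vista) can compute the SHA1 (substitute your ISO's file name):

C:\>certutil -hashfile windows7-ultimate-sp1-x64.iso SHA1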

Thursday, April 19, 2012

Arquillian initially overpromises and frustrates, but delivers real benefits

Arquillian is changing fairly rapidly, and the Arquillian folks are paying a lot of attention to feedback. This post discusses Arquillian 1.0.0.Final, and more importantly the ShrinkWrap Resolver and ShrinkWrap Dependency extensions. A lot has already improved since I wrote this, though most of it hasn't hit -Final versions yet.


After seeing a lot of talk, hype and excitement about Arquillian on twitter for several months, I finally got around to introducing it into a new project to give it a try. I'm told it'll make testing massively easier and save me tons of time, so using it is a no-brainer.

After my recent experience I recommend that you start using it too - but you'll need to be prepared for some rough edges and the need for workarounds until a few point releases have gone by.

To jump straight to the summary of it all, click here, or read on for the whole experience.

Wednesday, March 14, 2012

Fixing "Jam in Area A" (E3-6) on Xerox Phaser 5500

My Xerox Phaser 5500N has been failing to print with phantom "Jam in Area A" errors for the last few days. No paper is actually jammed in the printer, it just reports a jam after printing the first page. When viewing the jam log or the web interface, the jam code was "E3-6", which is documented in the manual only as "fuser area".

It turns out that the Xerox 5500 has a common failure point - arguably a design flaw - in the fuser exit switch.

Xerox support wanted $210 to come and look at the printer, wouldn't guarantee to even bring appropriate parts, and said it'd cost another $210 plus parts if they had to come back with parts to repair it. They wouldn't talk about the jam code or do any phone support of any kind at all. Payment for the site visit was to be made up front before they'd even book a tech, and they wouldn't give me an ETA before I paid. They don't sell parts, and won't provide service manuals. This made me very, very angry.

Instead of paying Xerox over $400, I paid the supermarket up the road $3 for some super-glue. After removing the fuser I could see that the fuser exit switch lever had broken, so I just glued it back on and got the printer working again. Depending on the nature of the fault, you might not even need the glue.

Tuesday, March 13, 2012

DIY data recovery

While perusing ZDNet Australia I encountered this article about data recovery, which appears to be a thinly veiled piece of advertorial for a data recovery firm.

The article pissed me off. It doesn't mention the importance of preventative action like good, well-tested backups. It certainly doesn't bother considering the possibility that you can recover from common cases of data loss yourself or with help from a techie friend, avoiding paying huge sums to the DR firm.

Here are a few tips for recovering lost pictures, documents, etc from a hard drive that's in reasonable physical condition but isn't readable from the computer. The same techniques apply to flash media like Compact Flash, MMC, SD Card, USB memory keys, etc, many of which have the unreliable FAT32 file system on them by default and are very prone to being rendered unreadable by minor file system corruption.

You should not attempt these tips unless you can accept the small risk that you might actually make the problem worse. Most importantly, do not attempt any of these steps if you suspect your hard drive has a serious mechanical fault - say it stopped working after being dropped and now makes squeaky, scratchy noises, it was immersed, it was burned, etc. Attempting to power on a hard drive that's damaged like that will make later recovery harder, so you should take drives with serious physical damage straight to DR pros.

For the other 99% of cases, read on.
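
A sketch of the standard first step, whatever else you do: image the failing media with GNU ddrescue and work on the image, never the original (replace /dev/sdX with your device):

$ ddrescue -n /dev/sdX disk.img rescue.log     # quick pass, skipping problem areas
$ ddrescue -r3 /dev/sdX disk.img rescue.log    # retry the bad sectors a few times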

Friday, February 24, 2012

Humidity sensor plans made

Quite a bit of reading has suggested that capacitive FDR (Frequency-Domain Reflectometry) and TDR (Time-Domain Reflectometry) approaches are going to be impractical when very low cost is a requirement. Resistive continues to look pretty unattractive, so I think TDT (Time-Domain Transmissive) is going to be the approach I focus on.

All this would be much easier if the abundant prior work on this topic weren't locked away behind expensive journal paywalls or commercial secrecy agreements. The few people who seem to have done it for hobby purposes mostly show up asking questions on forums and mailing lists; they never seem to write their results up. Needless to say, I'm frustrated, but I'm going to try to do something about it.

So: in TDT, you send a pulse down an insulated loop or line buried in the soil. The line's capacitance varies with the amount of water surrounding it, and since capacitance affects how quickly a pulse propagates along a conductor, you can estimate the water content by timing how long a high pulse takes to travel through the line.
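
To make that concrete: a bare Arduino can't time the nanosecond-scale pulse flight of real TDT, but the capacitance-to-time principle is easy to demonstrate with an RC charge-time measurement, which micros() can resolve. The sketch below is a rough illustration only - the pin numbers and the ~1MΩ charge resistor are placeholders, not a tested design:

// Illustration of capacitance-as-timing, NOT real TDT: charge the
// probe through a large resistor and time how long the sense pin
// takes to read HIGH. More water -> more capacitance -> longer time.

const int CHARGE_PIN = 8;   // drives the probe through ~1M ohm (placeholder)
const int SENSE_PIN  = 9;   // reads the probe voltage

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Drain the probe so every measurement starts from 0 V
  pinMode(SENSE_PIN, OUTPUT);
  digitalWrite(SENSE_PIN, LOW);
  pinMode(CHARGE_PIN, OUTPUT);
  digitalWrite(CHARGE_PIN, LOW);
  delay(10);
  pinMode(SENSE_PIN, INPUT);

  // Charge through the resistor; charge time grows with capacitance
  unsigned long start = micros();
  digitalWrite(CHARGE_PIN, HIGH);
  while (digitalRead(SENSE_PIN) == LOW && micros() - start < 100000UL) {
    // busy-wait until the input threshold is crossed, or time out
  }
  unsigned long elapsed = micros() - start;

  Serial.print("Charge time (us): ");
  Serial.println(elapsed);   // bigger ~= wetter, after calibration
  delay(1000);
}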

Friday, February 10, 2012

There's more to the NBN than fast Internet

People keep framing the debate about Australia's NBN (National Broadband Network) as if it's all about building a new service for faster Internet, and as if keeping the existing copper POTS infrastructure were a viable option.

Cost of services to end users as compared to POTS introduction

Look at the history of telephone rollouts. Initially they were unaffordable for many and there was a great deal of doubt about their utility - after all, you could usually just go to the local post office if you wanted to make a call, so what was the big deal?

Things are a bit different now.

On the other hand, unlike the telephone rollouts (and their later enabling of modems and dial-up banks), the NBN replaces an existing service. In that regard it's quite different, so we can't draw a direct comparison with the infrastructure rollout for phones. One could argue that the NBN merely improves an existing service rather than creating a new one, and that at its cost it's economically unjustifiable.

The POTS network is ailing

There's a reason why copper is being decommissioned where the NBN is being rolled out, and it's not just to provide an economic incentive to push people onto the NBN and help fund it.

The copper phone network is ailing. There aren't enough physical lines to service increasing population densities as discrete houses are replaced with flats, high-density developments, etc. High-frequency cross-talk from multiple ADSL services on long parallel copper lines is degrading service and causing poorer results for all users. Junctions and pits need more and more maintenance as they age and corrode. Installing new copper gets more and more expensive as the price of copper goes through the roof.

Money spent on the copper network is being sunk into a network that's going to have to be dropped or massively rebuilt at some point. There's going to be a point where it's better to stop spending on it and replace it instead.

Given that, the NBN rollout is the replacement of infrastructure that'll otherwise become more and more overloaded, ineffective, and expensive to operate. I increasingly see its actual performance benefits as secondary. It's more like replacing a falling-down school with a fancy new building that happens to have air-conditioning and pretty skylights: the new building has to go up anyway, because the old one must be knocked down and rebuilt soon one way or another.

Tuesday, January 31, 2012

SoWACS: The soil water content measurement systems and sensors mailing list

If you've been following the soil moisture stuff I've been playing with, you'll have seen me referencing many other people's work, both hobbyist and professional. This isn't new stuff, though it's poorly documented on the 'net and hard to find.

It turns out there's a mailing list dedicated to the topic - but you'd be really, really lucky to find it. Check out the archives here; they're seriously informative:

http://groups.google.com/group/sowacs?pli=1

The group moved from its old hosting to Google Groups a while ago, but the best stuff is in the old archive on sowacs.com. It's frustratingly patchy, and there don't seem to be mbox files of the archives to download, but it's still very informative.

Monday, January 23, 2012

I've had it with HTC - thanks for the rescue, CyanogenMod + AAHK

HTC pushed an Android 2.3.5 update to my Vodafone Australia-branded HTC Desire HD. There was no changelog, and along with the Android update it turns out I got a new version of HTC Sense (yay?) with all sorts of animations I can't turn off, plus extra bloat.

Great work HTC, you made the phone faster, then ruined it with more pointless animation. At least the "no window animations" setting used to work in the old version...

Friday, January 20, 2012

Atmel Microcontroller (non-ATmega/ATtiny compatible) with built-in 433MHz (US: 310MHz) transmitter!

While researching parts for my soil moisture sensors I stumbled across these awesome Atmel microcontrollers:

I was so excited I had to share. At about AU$8 each, these little beasties might make building wireless soil moisture sensors so much easier it's just not funny. The main problem is going to be ordering them, since Jaycar and Element14 don't carry them, and Digi-Key lists them as non-stock components with 4,000-unit minimum volumes. They're 4-bit micros with their own architecture, not AVRs, so they won't be compatible with the ATmega or ATtiny range, and I lose the advantage of having the same architecture on sensor and control system. For something as relatively simple as sampling an analog temperature and humidity sensor, that may not be a big problem.

It may still end up being easier to use an ATtiny for the analog sensor controller and digital sensor data transmitter, so I can use (mostly) the same software tools as for the ATmega on the control board. I could then hook the sensor's ATtiny up to either some wiring for wired service, or to an RF transmitter IC for wireless operation, without much (if any) change to the sensor codebase.

Atmel also have a family of RF receiver ICs (with matching TX modules and transceivers available), so I might be able to avoid the need for a breakout board / shield for RF receiver support and just make it an optional component in the base design. Parts like the ATA5723/ATA5724/ATA5728 and the ATA5745/ATA5746 RF receiver ICs could be awfully handy at about AU$4 each ... if I can find someone who'll sell them to me in less than 1,500-unit quantities. If not, there are lots of other highly integrated 433MHz RF receiver and transmitter ICs out there.

The ATA8204P3-TKQY looks particularly suitable; it's a slower and cheaper unit without UHF, but that shouldn't be a biggie for my use. It's cheaper than any of the other units except the ATA8202-PXQW 19 on Digi-Key, and should do the job fine. It's surface-mount, though, so it won't be assembly-friendly. An alternative might be the ALPHA-RX433S from RF Solutions, as it's packaged as a little module that'd be a bit saner to solder up.

Tuesday, January 17, 2012

DIY DC soil moisture sensor - early test successful

Laptop connected to Arduino connected to flowerpot

On the progressive difficulty scale of home-built soil moisture sensors, the bottom rung is a DC soil conductivity sensor that uses simple resistivity measurement.

It took a couple of hours to build one of those last night, most of which was spent incompetently attempting to produce a decent solder joint on steel wire and on the cleaned heads of galvanized nails. Anyone who can use a soldering iron without being a hazard to themselves and those around them should be able to whip something like this up in a few minutes.
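
For the record, the Arduino side is about as simple as it gets. Here's a minimal sketch, assuming the nail probes sit between a digital "power" pin and A0 with a fixed 10kΩ resistor from A0 to ground - the pin and resistor values are just what I'd pick, adjust to suit:

const int PROBE_POWER_PIN = 8;    // energises the probe pair (placeholder pin)
const int PROBE_SENSE_PIN = A0;   // junction of the probes and the 10k resistor

void setup() {
  pinMode(PROBE_POWER_PIN, OUTPUT);
  digitalWrite(PROBE_POWER_PIN, LOW);  // keep DC off the probes between reads
  Serial.begin(9600);
}

void loop() {
  digitalWrite(PROBE_POWER_PIN, HIGH);  // power the divider only briefly
  delay(10);                            // let the reading settle
  int raw = analogRead(PROBE_SENSE_PIN);
  digitalWrite(PROBE_POWER_PIN, LOW);

  // Wetter soil conducts better, so more voltage reaches A0:
  // near 0 = dry / open circuit, towards 1023 = saturated.
  Serial.print("Soil reading: ");
  Serial.println(raw);
  delay(5000);
}

Powering the probes only for the instant of each reading matters more than it looks: constant DC through damp soil electrolyses and corrodes the nails surprisingly quickly.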

Monday, January 16, 2012

Interested in soil moisture sensors and irrigation control? Start with the UF/IFAS Virtual Extension series

I've been having ... "fun" ... trying to find a way to build an affordable network of soil moisture sensors that don't require too much looking after.

It's harder than you'd think, but this UF/IFAS Virtual Extension series on soil moisture and irrigation has made it a lot easier to understand the different approaches and sensor types. It'll help you understand the differences between resistive and capacitive soil moisture measurement, introduce alternatives like tensiometers, etc. This is important whether you plan to DIY your sensors or buy off the shelf.

Saturday, January 14, 2012

Using a RHT03 (aliases: RHT-22, DHT22, AM2302) temperature/humidity sensor from Arduino

I picked up a nice, compact little temperature and relative humidity sensor called the RHT03 from Little Bird Electronics for a project. It and very similar parts appear to go by the names RHT-22, DHT-22, and AM2302. You can find the part at SparkFun, Adafruit, etc. too.

It took a lot more work to get it working than I expected, so I thought I'd write it up here for anyone else who is looking into it. There's sample code at the end of this post, but you should probably read the details because this is a quirky beast.

UPDATE: I've since found a library on GitHub, nethoncho/Arduino-DHT22, that does a better job more simply and compactly. It works fine with my sensor. It needed some changes for Arduino 1.0 and some further tweaks to work how I wanted, so I've uploaded a fork here: https://github.com/ringerc/Arduino-DHT22.
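
If you just want the library route, usage looks something like this. The method and enum names (readData, getTemperatureC, getHumidity, DHT_ERROR_NONE) are from that library as I recall them - check the headers in whichever copy you use:

#include <DHT22.h>

#define DHT22_PIN 7   // sensor data line; any free digital pin (assumption)

DHT22 myDHT22(DHT22_PIN);

void setup() {
  Serial.begin(9600);
}

void loop() {
  delay(2000);  // the sensor can't be polled faster than about every 2s
  DHT22_ERROR_t errorCode = myDHT22.readData();
  if (errorCode == DHT_ERROR_NONE) {
    Serial.print("Temp: ");
    Serial.print(myDHT22.getTemperatureC());
    Serial.print(" C, RH: ");
    Serial.print(myDHT22.getHumidity());
    Serial.println(" %");
  } else {
    Serial.print("Read failed, error code: ");
    Serial.println((int) errorCode);
  }
}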

Thursday, January 12, 2012

Extending Arduino example CIRC-05 to use hardware SPI control

For kicks, I've extended the basic Arduino shift register LED control example CIRC-05 from www.oomlout.com to use the Arduino's hardware SPI routines instead of software signalling.

For someone who has done very little with low-level electronics and who didn't know what SPI even was until today, this was embarrassingly easy. Kudos to the excellent Arduino libraries and the great documentation for making it simple.

I'm posting the re-written example for CIRC-05 here. It has the original software-based control as well as support for SPI, so you can see how similar the methods are.

(BTW, if you were wondering what a "latch" is in the IC, see this example.)
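
In case the embedded example doesn't load, here's a cut-down sketch of just the hardware SPI half to give the flavour. The wiring assumption is the usual one for a 74HC595: data to MOSI (pin 11), clock to SCK (pin 13), and the latch on pin 10:

#include <SPI.h>

const int LATCH_PIN = 10;   // 74HC595 ST_CP (storage register clock)

void setup() {
  pinMode(LATCH_PIN, OUTPUT);
  SPI.begin();   // claims MOSI (11) and SCK (13); replaces manual shiftOut()
}

void updateLEDs(byte value) {
  digitalWrite(LATCH_PIN, LOW);    // hold the outputs steady while shifting
  SPI.transfer(value);             // the hardware clocks out all 8 bits
  digitalWrite(LATCH_PIN, HIGH);   // latch the new bits onto the LEDs
}

void loop() {
  for (int i = 0; i < 256; i++) {
    updateLEDs((byte) i);   // binary count on the 8 LEDs
    delay(100);
  }
}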

SparkFun Inventors Kit CIRC-03 - Motor not working (not spinning)? It's an error in the instructions

I've started playing with some basic tutorial/toy electronics using the Arduino platform and the "SparkFun Arduino Inventors' Kit" (hardly "inventors'", but anyway...) after picking it up as part of an order from the awesome outfit Little Bird Electronics. The kit is generally good, but I've hit an interesting issue with it that's worth documenting for anyone else who has one.

The short version: if you're using the SparkFun kit that specifies a 10kΩ resistor and the test circuit doesn't work (the motor won't spin), you might need a lower-valued resistor between the transistor and pin 9 of the Arduino board. With 10kΩ the transistor only gets about 0.4mA of base current, which may not be enough to saturate it and pass motor-sized currents; something in the few-hundred-ohm to low-kΩ range gives it much more to work with.

If this is the case, you'll find that when you flick the motor's shaft around with your fingers so it spins, sometimes it'll spin down slowly and sometimes it'll stop suddenly, depending on whether the Arduino is currently trying to drive it or not.

Check for all the usual errors before assuming the issue described here is what's wrong with your circuit. You might've reversed the flyback diode, made a poor connection on a power rail, etc.