Friday, May 30, 2008
Java without Java, part I
The Java language has been around for a while, and various "alternative" implementations, proprietary and open source, have flourished. Microsoft recently announced that it is dropping its Visual J# implementation (which is part of Visual Studio 2005 but not 2008), but in the open source world nothing can really be "dropped" as such, so using Java without Sun's implementation was, and remains, an option.
In this two-part post, I am going to review some options for running relatively simple Java programs on Windows without using Sun's Java VM. The first part is devoted to using gcj to compile and run Java code.
It is, of course, relatively straightforward to actually compile Java files (to .class files or natively), and the Java front end to the GNU Compiler Collection has been known for a while as "gcj"; it is a much more demanding and challenging task to implement a substantial piece of the Java library. This activity has been ongoing as the GNU Classpath project, and the standard GNU GCC distribution includes a slightly modified version of Classpath, which is compiled into the library "libgcj.a/.so".
A Cygwin installation gladly offers gcj version 3.4.6, along with the corresponding library. Furthermore, this program passes the standard "Hello, World!" test:
public class Hello {
    static public void main(String[] argv) {
        System.out.println("Hello, World!");
    }
}
Here is how to compile and run it:
> /usr/bin/gcj --main=Hello Hello.java -o Hello
> ls -l Hello.exe
-rwxrwxrwx 1 user mkpasswd 6359272 May 30 16:45 Hello.exe*
> ./Hello.exe
Hello, World!
Unfortunately, any attempt to move substantially beyond "Hello, World!"-style tests results in a run-time failure like this:
> ./Test.exe
Exception in thread "main" java.lang.Error: Not implemented
   <<No stacktrace available>>
without even a clear indication of what is not implemented.
Worse, the resulting Cygwin executables, while statically linked against "libgcj.a" (hence a hefty 6 MB for a trivial program), still depend on the Cygwin DLL "cygwin1.dll"; the compile option "-mno-cygwin", which is meant to generate executables independent of the Cygwin DLL and does so for C and C++ files (so-called MinGW mode), fails even for the aforementioned "Hello, World!" example (compilation succeeds, but the executable crashes without any error message).
Well, of course, version 3.4.6 is already quite old. What if one downloads the latest GCJ and compiles it under Cygwin?
In fact, one quickly discovers that Cygwin support has all but been dropped from GCJ. You can build and install the compiler itself, but the Java library is specifically marked as "not supported" in the GCC configuration script:
*-*-cygwin*)
    target_configdirs="$target_configdirs target-libtermcap target-winsup"
    noconfigdirs="$noconfigdirs target-gperf target-libgloss ${libgcj}"
If one removes this restriction and forces compilation, it proceeds for a while and then hangs indefinitely while trying to execute
gcj-dbtool -n classmap.db
Another option to try is the MinGW distribution. However, it currently ships gcc-3.4.5, with GCC 4 still in "alpha" testing, so this option is not yet viable either.
The situation is not completely hopeless, though: a nice fellow named Mohan Embar makes his own distributions of MinGW/GCJ, including the latest versions. This kind of works, but not without some caveats.
For the purposes of this slightly-more-complex-than-Hello-World testing, I used the following Java program, "Test.java":
import java.util.*;
import java.util.regex.*;

public class Test {
    static private Pattern p = Pattern.compile("\\s*(\\d+)\\s*\\+\\s*(\\d+)\\s*$");

    static public void main(String[] argv) {
        String arg = join(Arrays.asList(argv), " ");
        Matcher m = p.matcher(arg);
        if (m.lookingAt()) {
            int a = (new Integer(m.group(1))).intValue();
            int b = (new Integer(m.group(2))).intValue();
            System.out.println(a + " + " + b + " = " + (a + b));
        } else
            System.out.println("Could not parse!");
    }

    public static String join(Collection s, String delimiter) {
        StringBuffer buffer = new StringBuffer();
        Iterator iter = s.iterator();
        while (iter.hasNext()) {
            buffer.append(iter.next());
            if (iter.hasNext())
                buffer.append(delimiter);
        }
        return buffer.toString();
    }
}
If you download this distribution, unzip it, set your $PATH, and compile Test.exe as before, you will get the following failure when trying to run it:
Exception in thread "main" java.lang.ExceptionInInitializerError
   at java.lang.Class.initializeClass(/datal/gcc/gcc/libjava/java/lang/Object.java:513)
   at gnu.java.util.regex.RE.getLocalizedMessage(/datal/gcc/gcc/libjava/classpath/gnu/java/util/regex/RE.java:262)
   at gnu.java.util.regex.RESyntax.(/datal/gcc/gcc/libjava/classpath/gnu/java/util/regex/RESyntax.java:345)
   at java.lang.Class.initializeClass(/datal/gcc/gcc/libjava/java/lang/Object.java:513)
   at java.util.regex.Pattern.(/datal/gcc/gcc/libjava/classpath/java/util/regex/Pattern.java:76)
   at java.util.regex.Pattern.compile(/datal/gcc/gcc/libjava/classpath/java/util/regex/Pattern.java:153)
   at java.util.regex.Pattern.compile(/datal/gcc/gcc/libjava/classpath/java/util/regex/Pattern.java:135)
   at Test.(ccZgp08ajx:0)
   at java.lang.Class.initializeClass(/datal/gcc/gcc/libjava/java/lang/Object.java:513)
   at Test.main(ccZgp08ajx:0)
Caused by: java.util.MissingResourceException: Bundle gnu/java/util/regex/MessagesBundle not found
   at java.util.ResourceBundle.getBundle(/datal/gcc/gcc/libjava/java/util/ResourceBundle.java:372)
   at java.util.ResourceBundle.getBundle(/datal/gcc/gcc/libjava/java/util/ResourceBundle.java:243)
   at gnu.java.util.regex.RE.(/datal/gcc/gcc/libjava/classpath/gnu/java/util/regex/RE.java:133)
   at java.lang.Class.initializeClass(/datal/gcc/gcc/libjava/java/lang/Object.java:513)
   ...9 more
The problem appears to be in static linking (that is, linking with libgcj.a rather than loading libgcj.dll at runtime). Java is capable of loading classes dynamically, while static linking can only link in objects (classes) that are referenced at compile time; hence any implementation that relies on dynamic loading is doomed to fail. For the time being, therefore, "static" linking is not "officially" supported by the GCJ team, and on Unix shared linking is the default. On Windows, however, most GCJ distributions do not support DLL linking. (Mohan Embar claims that his GCC/GCJ 3.4 distribution supports DLL linking, but it didn't work for me, and even if it did, it would be useless in such an old version anyway.)
In this case we are loading not a class but so-called "resources": specifically, a message file with various messages associated with "regular expression" errors. If you have access to this file (which is included in any GCJ source distribution), you can link in this resource manually like this:
> gcj --resource gnu/java/util/regex/MessagesBundle.properties -c \
    path\gcc-version\libjava\classpath\resource\gnu\java\util\regex\MessagesBundle.properties \
    -o MessagesBundle.properties.o
> gcj --main=Test MessagesBundle.properties.o Test.java -o Test
> ./Test.exe 2+2
2 + 2 = 4
As you can see, this actually works (!!!), though the size of Test.exe is 38 megabytes (!!!) (you can use the "strip" utility to get it down to approximately 12 MB).
Unfortunately, any attempt to compile and run one of the Swing UI demos results in failure:
> .\DynamicTreeDemo.exe
Exception in thread "main" java.awt.AWTError: Cannot load AWT toolkit: gnu.java.awt.peer.gtk.GtkToolkit
   at java.awt.Toolkit.getDefaultToolkit(/datal/gcc/gcc/libjava/classpath/java/awt/Toolkit.java:544)
   at java.awt.EventQueue.invokeLater(/datal/gcc/gcc/libjava/classpath/java/awt/EventQueue.java:316)
   at javax.swing.SwingUtilities.invokeLater(/datal/gcc/gcc/libjava/classpath/javax/swing/SwingUtilities.java:950)
   at DynamicTreeDemo.main(D:/USERPR~1/IGNATI~1.PTC/LOCALS~1/Temp/ccCwbaaajx:0)
Caused by: java.lang.UnsatisfiedLinkError: gtkpeer: can't open the module
   at java.lang.Runtime._load(/datal/gcc/gcc/libjava/classpath/java/awt/event/WindowEvent.java:309)
   at java.lang.Runtime.loadLibrary(/datal/gcc/gcc/libjava/java/lang/Runtime.java:656)
   at java.lang.System.loadLibrary(/datal/gcc/gcc/libjava/java/lang/System.java:515)
   at gnu.java.awt.peer.gtk.GtkToolkit.(/datal/gcc/build/wingcc_build/i686-pc-mingw32/libjava/gnu/java/awt/peer/gtk/GtkToolkit.java:145)
   at java.lang.Class.initializeClass(/datal/gcc/gcc/libjava/classpath/java/awt/event/WindowEvent.java:309)
   at java.lang.Class.forName(/datal/gcc/gcc/libjava/classpath/java/awt/event/WindowEvent.java:309)
   at java.awt.Toolkit.getDefaultToolkit(/datal/gcc/gcc/libjava/classpath/java/awt/Toolkit.java:561)
   ...3 more
I don't know whether this is fixable with GCJ, but it is in fact possible to run some Swing UI demos "natively" on Windows; we will consider how to do this in Part II.
Tuesday, May 20, 2008
32-bit MPlayer on 64-bit Linux, again
Almost exactly one year ago, I published here an essay on building a 32-bit mplayer on 64-bit Red Hat Enterprise Linux (it actually took me a lot of time back then). Reading it over a year later, I can hardly understand how I managed to do it and how the process worked; it must have been some magic performed by mplayer's configuration. Anyway, here is a much more straightforward and comprehensive approach to building not just mplayer, but a whole 32-bit Linux (sub-)system within an existing 64-bit OS. It is based on Ubuntu 8.04 "Hardy Heron", but should work with small modifications on any Debian-based system.
The following is largely based on this forum post, which in turn references this tutorial. I will assume that the code name of your release is "hardy" and the 32-bit root is "/sys32"; make sure to replace these with values suitable for your system.
- sudo apt-get install dchroot debootstrap
- sudo mkdir /sys32/
- sudo vi /etc/dchroot.conf
- Add this line: hardy /sys32
- sudo debootstrap --arch i386 hardy /sys32/ http://archive.ubuntu.com/ubuntu
- sudo cp /etc/{passwd,shadow,group,sudoers,hosts} /sys32/etc/
- sudo cp /etc/apt/sources.list /sys32/etc/apt/sources.list
(At this point you might want to edit your sources.list file to your taste, or remove anything specific to the 64-bit system, not that there should be anything.)
Now comes a somewhat controversial step. A full-blown Debian installation can take gigabytes of disk space, and only a small piece of that is binary data that actually differs between 32-bit and 64-bit. You can reduce the required disk space by "sharing" certain folders with architecture-independent data between the main system and the sub-system, but that has the side effect that certain actions in the sub-system can break how things work in your main system. The forum post referenced above recommends sharing /usr/share/fonts this way, but I decided against it; it costs me about 200 MB more, but buys a certain peace of mind. Some directories, though, are worth sharing.
- sudo vi /etc/fstab
- Add the following lines:
- /home /sys32/home none bind 0 0
- /tmp /sys32/tmp none bind 0 0
- /dev /sys32/dev none bind 0 0
- /proc /sys32/proc proc defaults 0 0
- /media/cdrom0 /sys32/media/cdrom0 none bind 0 0
- sudo mkdir /sys32/media/cdrom0
- sudo mount -a
OK, by now you should have a basic Debian-style 32-bit system under /sys32. This command:
- sudo chroot /sys32/
will switch you to operating from "within" this sub-system. From this point on, we continue the setup from within (note that "chroot" automatically puts us into a root-privileged shell, so "sudo" isn't needed):
- locale-gen "en_US.UTF-8"
- dpkg-reconfigure locales
- apt-get update
- apt-get upgrade
- apt-get install libgtk2.0-dev gcc g++ make
The last command is just an example; feel free to install whatever you need, including software built from source. MPlayer, among other things, can be built this way without any problems.
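Before going further, it is worth sanity-checking that what debootstrap put under /sys32 really is 32-bit. Here is a small helper I would use for this (a sketch; the /sys32 path is the one assumed throughout this post). It simply reads the EI_CLASS byte of the ELF header, which is 1 for 32-bit and 2 for 64-bit binaries:

```shell
# Classify an executable as 32- or 64-bit by its ELF header:
# the byte at offset 4 (EI_CLASS) is 1 for 32-bit, 2 for 64-bit.
elf_class() {
    case "$(od -An -j4 -N1 -tu1 "$1" 2>/dev/null | tr -d ' ')" in
        1) echo "32-bit" ;;
        2) echo "64-bit" ;;
        *) echo "not an ELF file" ;;
    esac
}

# After a successful debootstrap, this should print "32-bit":
if [ -e /sys32/bin/ls ]; then
    elf_class /sys32/bin/ls
fi
```

(The same check can of course be done with "file /sys32/bin/ls"; the helper is just self-contained.)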
Note that when installing from source, it may be useful to do everything but the last installation step from your regular user account, like this:
- [Download all source you might want]
- sudo chroot /sys32/
- su - your_user_account
- cd /tmp
- tar -xvzf your_sources.tar.gz
- cd your_sources
- ./configure
- make
- make test
- exit
- cd /tmp/your_sources
- make install
One last remark: you can install some 32-bit support as part of your 64-bit system, including a compiler and run-time libraries (you cannot, however, get build libraries and headers other than by building them yourself, following steps similar to those outlined above). Here it is:
- apt-get install build-essential lib32gcc1 libc6-dev-i386 libc6-i386 lib32stdc++6 ia32-libs gcc-multilib
This makes it possible to run gmplayer as /sys32/usr/local/bin/gmplayer, without any change of environment or any dependency on the 32-bit subsystem.
Labels: 64-bit, debian, mplayer
Monday, May 19, 2008
FAT32 file system in Solaris 10
Despite the many Unix-type file systems in existence, it is still problematic to find a single file system that is compatible both with traditional BSD-based Unix systems like OpenSolaris (which use the older UFS or the newer ZFS) and with modern Linux distributions (which use ext2 or ext3). Though it is technically possible both to mount UFS under Linux and ext3 under Solaris, in neither case is support enabled by default, and it may not be very reliable.
That leaves FAT32 from Microsoft as the only file system type shared by all Unix/Linux computers without the need for any hacking or kernel recompilation. All its inherent limitations notwithstanding, FAT32-formatted USB flash drives are usually immediately recognized and mounted with full read/write access by all modern Unix and Linux distributions.
The only thing is, creating such file systems is not exactly simple. For starters, since the Windows 2000 operating system, Microsoft does not provide an option to (re-)format a drive as FAT32 if its size exceeds 32 GB, despite the fact that the file system itself has no such limit; so, funny as it is, to create new FAT32 drives one needs either an older Windows OS, like Windows 98, or a Unix/Linux computer.
Debian (and, I guess, all other modern Linux systems) allows you to create a FAT32 file system with one simple command like this (you need to have dosfstools installed first):
mkdosfs -F 32 /dev/sdb1
(You can also use 'fdisk' prior to that to create a DOS partition table, but I am not sure it is necessary.) Drives formatted this way can then be accessed from a Windows computer with no trouble, like any other FAT32 drive.
With Solaris, though, things aren't exactly that simple. I assume it is being somewhat too picky about the existence of a "valid" DOS-style "primary" partition table (remember the old DOS logic of four "primary" partitions C, D, E, F, with the rest, if any, being "extended" partitions?). While I am sure it must somehow be possible to convince fdisk and mkdosfs under Linux to create something Solaris will understand, here I present the opposite approach: creating a FAT32 file system under Solaris.
All operations are executed under "root".
First off, Solaris has so-called "volume management", which (I guess) is charged with auto-mounting user disks. All instructions related to manipulating disk devices and mounts outside the services provided by "volume management" insist that you shut down the volume management server up front with this command:
/etc/init.d/volmgt stop
and then restart it with "/etc/init.d/volmgt start" when you're done.
Next, after you have plugged in/installed the drive to be formatted, this command:
iostat -En
will help you identify the correct device name for your disk. My external USB hard drive "FreeAgent" from Seagate, for example, is reported like this:
c2t0d0   Soft Errors: 86 Hard Errors: 0 Transport Errors: 0
Vendor: Seagate  Product: FreeAgent Pro  Revision: 400D  Serial No:
Size: 750.16GB <750156374016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 86 Predictive Failure Analysis: 0
Next, run Sun's fdisk to (interactively) create a DOS partition table, like this:
fdisk /dev/rdsk/c2t0d0p0
(Note that this has to be the "character special" file from /dev/rdsk, with p0 (meaning "first primary partition") appended to the name.)
Finally, this command generates the file system:
mkfs -F pcfs -o 'fat=32' /dev/rdsk/c2t0d0p0:c
(Note the ":c" appended to the name, corresponding to the traditional DOS assignment of drive letter "C" to the first partition.)
You can now mount your file system with this command:
mount -F pcfs /dev/dsk/c2t0d0p0:c /myusbdrive
(Note that this time we use the "block special" file from /dev/dsk.)
Labels: solaris
Thursday, May 15, 2008
Using NFS mounts under Windows
When one needs to access files on a remote Linux workstation from a Windows computer, there are two obvious ways to do it: run a Samba server on Linux or an NFS client on Windows. While the former approach is, I guess, by far the more popular, here I will consider the latter: sharing files via an NFS client/server.
First, we need to install an NFS client. I don't know of any free one, and perhaps the best-known and most reliable commercial solution is DiskAccess from Javvin Technologies. Installation is straightforward.
Problems begin, though, when it turns out that in order to access an NFS file system with DiskAccess, one needs a certain authentication configuration. In corporate environments this could be, and is, done with NIS (formerly the Yellow Pages server).
However, when we only have two machines that need to talk to each other, this might be overkill and a nightmare to set up. This is perhaps why there is another option in the DiskAccess configuration dialog, "PCNFSD Server". What is that?
As the name suggests, this is a mini-daemon intended to facilitate communication between a "PC" and an NFS server. This sounds good, except that the utility is barely known even to Google, isn't in any Linux distribution, and, as a matter of fact, the "canonical" version from Sun is perhaps older than Linux itself.
Fortunately, one kind soul invested the necessary effort to port it to Linux:
- Get it here: http://ftp.linux.org.uk/pub/linux/Networking/attic/Other/pcnfsd/linux_pcnfsd2.tgz, untar and unzip into a new directory;
- Edit file common.h to uncomment "#define SHADOW_SUPPORT"
- Make any other changes necessary to build successfully. On RHEL 4, I had to define "LIBS= -lcrypt" in Makefile.linux;
- make -f Makefile.linux
- Run linux/rpc.pcnfsd as root. No configuration is required.
Fast, simple, and keeps DiskAccess happy.
Of course, what remains to be done is to "export" a directory and "mount" it under Windows.
To export the directory /ext/user on RHEL 4:
- Add a line like this to the file /etc/exports:

/ext/user 192.168.2.1/255.255.252.0(sync,rw)

(Provided this IP subnet accurately describes your local net, this will export the directory in read-write mode to the local network only. You can substitute '*' for the IP range if security is not at the top of your priority list.)
- Reload the list of exported directories with /usr/sbin/exportfs -r or /usr/sbin/exportfs -a ;
- While this is supposed to be enough, you may need to restart NFS server with command like this: /etc/rc.d/init.d/nfs restart
To mount the exported NFS directory under Windows:
- Once, after installing DiskAccess, go to the Control Panel, select the DiskAccess item, enter your credentials, and set other options as you see fit;
- You can now mount with the regular Windows UI or with a command like this:
net use R: \\nfs_host\ext\user
You shouldn't have to use a drive letter, but for some reason it didn't work for me without one.
Labels: linux, NFS, server, windows
Wednesday, May 14, 2008
Using rsync for backup, part I
Huge hard drives are cheap these days, and they almost always come bundled with some fancy backup solution; not at all surprising, given that backup is precisely what the majority of customers are going to use the new disk space for.
Interestingly enough, there aren't that many good backup utilities for Linux, or open source tools of this sort in general. Most likely this is because people still use "rsync" for Linux, Unix, and multi-platform backups.
"rsync" is very far from being a "perfect" backup tool (more on that below), but at least it is old, stable, reliable, simple to use, and available for all kinds of platforms, including Cygwin and a native Windows port.
Therefore, we will begin by reviewing the most basic ways to use rsync for backups.
Note that it seems these blatant deficiencies have caused the rsync maintainers, over the last few years, to introduce new options intended to help implement backup operations properly. For the moment, though, I don't want to get into that; the following tips are based on rsync version 2.6.3 (protocol version 28, released in 2004) or higher.
First, we need to start and configure rsync server on a dedicated server. Here is the plan.
Server configuration (everything is executed as "root"):
- Decide whether you want to run the rsync daemon through inetd (most servers are used this way) or as a stand-alone server (like Apache, for example). The advantages of either approach are outlined in this passage:
If you start off the rsync daemon through your inet daemon, then you incur much more overhead with each rsync call. You basically restart the rsync daemon for every connection your server machine gets! It's the same reasoning as starting Apache in standalone mode rather than through the inet daemon. It's quicker and more efficient to start rsync in standalone mode if you anticipate a lot of rsync traffic. Otherwise, for the occasional transfer follow the procedure to fire off rsync via the inet daemon. This way the rsync daemon, as small as it is, doesn't sit in memory if you only use it once a day or whatever. Your call.
- Let's assume for the following that you, like me, are going to run rsync via inetd. All modern Linux distributions have the necessary hookup already done for you, and all you need to do is open the appropriate UI and enable the rsync server. However, just in case, here are the command-line configuration instructions from the rsyncd.conf manual page:
When run via inetd you should add a line like this to /etc/services:
rsync 873/tcp
and a single line something like this to /etc/inetd.conf:
rsync stream tcp nowait root /usr/bin/rsync rsyncd --daemon

- Create the config file /etc/rsyncd.conf like this (it is assumed that "/ext/BACKUP" is the root of your backup area):
log file = /ext/BACKUP/log/rsyncd.log
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
secrets file = /etc/rsyncd.scrt
auth users = rsync
read only = no
transfer logging = yes
list = yes

[MyModule]
path = /ext/BACKUP/mymoduledir
comment = Description of what this backup location is for
- Create all necessary sub-directories of /ext/BACKUP (including "log") and change ownership of all "module" directories like /ext/BACKUP/mymoduledir to 'nobody':

# chown nobody /ext/BACKUP/mymoduledir
- Create the password file /etc/rsyncd.scrt with one line only, which looks like this:
rsync:password
('rsync' can be any 'user' name; see the 'auth users' configuration option above).
Change the access mode of this file to 0600:

# chmod 0600 /etc/rsyncd.scrt

- If external access (from outside your local network) is required, you can change the default port 873 to something else and/or open this port in your firewall or router.
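As a side note, the secrets-file creation and the chmod can be collapsed into one step by setting umask in a subshell, so the file never exists with loose permissions even briefly. A sketch, shown on a scratch path since writing the real /etc/rsyncd.scrt requires root:

```shell
# Create the secrets file with mode 0600 from the very start; the umask
# applies only inside the ( ... ) subshell. For real use, replace
# /tmp/rsyncd.scrt with /etc/rsyncd.scrt and run as root.
rm -f /tmp/rsyncd.scrt
( umask 077; printf 'rsync:password\n' > /tmp/rsyncd.scrt )
ls -l /tmp/rsyncd.scrt
```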
Client configuration (executed by any user who has read access to the files being backed up):
- Create the file /etc/rsync.p (if acting as root) or ~/.rsync.p (if a regular user) with just one line, which is the password from the server configuration section. Change its mode to 0600.
- You can now start backup job by executing this command:
rsync -az path_1 ... path_N --password-file /etc/rsync.p rsync@server::MyModule
where path_1 ... path_N are all the directories you want to copy to the MyModule backup location.
Note, however, that (a) rsync attaches a special meaning to paths which end with a slash, like '/home/user/', basically interpreting them as '/home/user/*' (this is most useful when backing up just one path); and (b) for every path in the list, a special sub-directory will be created under /ext/BACKUP/mymoduledir, corresponding to the last component of the path (after the special treatment of end-slashes is taken into account, of course). It is your responsibility to make sure these do not overlap.

- If you want to automate regular backups, create a cron job like this:
30 2 * * * /usr/bin/rsync options as above

- As another example, under Cygwin you can use a command like this to back up all files on all drives while excluding some Windows directories:
rsync -az /cygdrive/c /cygdrive/d \
--exclude 'System Volume Information/' --exclude /c/WINNT/ \
--delete-excluded rsync@rsync.myhomeserver.com::laptop
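The trailing-slash rule from point (a) above is easy to see with a purely local rsync run; no daemon or module is needed (the /tmp paths are just for this demonstration):

```shell
# Set up a tiny source tree and two empty destinations.
rm -rf /tmp/slashdemo
mkdir -p /tmp/slashdemo/src /tmp/slashdemo/dest1 /tmp/slashdemo/dest2
touch /tmp/slashdemo/src/file.txt

# Without a trailing slash, the directory itself is copied...
rsync -a /tmp/slashdemo/src /tmp/slashdemo/dest1
ls /tmp/slashdemo/dest1        # -> src

# ...with a trailing slash, only its contents are copied.
rsync -a /tmp/slashdemo/src/ /tmp/slashdemo/dest2
ls /tmp/slashdemo/dest2        # -> file.txt
```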
Here is why this approach, while useful, needs certain rework to provide a true backup solution:
- There is no way to preserve file ownership information on the server. All files are saved as created by "nobody" (you can change this default, but you cannot force the server to save the original file ownership data);
- If a directory (in the original location) has restricted access, the server might not be able to back up files within it (you can force the server to relax permissions, but then you would have lost the original permissions info);
- While a detailed log file can be saved (to this end we enabled the 'transfer logging' option above), there are no (good) tools to analyze it and report backup failures, especially for backups run in the background by cron.
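For what it's worth, a first approximation of that missing log analysis can be done with grep. This is only a sketch: the patterns below (rsync's "error"/"warning" message prefixes) are an assumption rather than an exhaustive list, and the log path is the one configured in rsyncd.conf above:

```shell
# Scan the rsync log for error/warning lines; the path can be
# overridden with RSYNC_LOG (e.g. when testing with a sample log).
LOG="${RSYNC_LOG:-/ext/BACKUP/log/rsyncd.log}"
if [ -f "$LOG" ] && grep -E 'rsync (error|warning):' "$LOG"; then
    echo "backup problems found in $LOG"
else
    echo "no problems found (or no log at $LOG)"
fi
```

A cron job that pipes this into mail would at least surface gross failures, though it is no substitute for a real reporting tool.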
P.S. On xinetd-based systems, instead of editing /etc/inetd.conf, create a service entry like this (from here):

service rsync
{
    disable         = no
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/bin/rsync
    server_args     = --daemon
    log_on_failure  += HOST
    instances       = 2
}

and do "/etc/init.d/xinetd restart".
Monday, May 12, 2008
Fixing Perl installation under Cygwin
I have discussed on multiple occasions various issues arising from Cygwin upgrades, but there is one problem that I have fixed multiple times and never cared to document properly.
Perl's built-in extension installation tools rely on the existence and/or executability of certain files; this is tested with Perl file test operators like 'if -r <filename>'. As specified in the documentation, this operator tests that the "File is readable by effective uid/gid". No doubt this test simply matches the file permissions, as reported by the 'stat' system call, against the file owner and the current user id.
This seems sane enough, but the trouble is that the Windows-centric notion of "administrative rights" cannot be reliably mapped to the Cygwin world. I always work in Windows as "myself", but with full administrative rights; Cygwin cannot possibly appreciate that I can access every file as if I were "Administrator" even though I am not. As a result, multiple files created with permissions like this:
-rwxrwx---+ 1 Administrators SYSTEM 470528 May 1 2007 /usr/bin/bash.exe*
fail Perl's "-r" test (unless, of course, one actually logs in as "Administrator"), even though I can still read and execute them.
While it is beyond me why Cygwin should create such peculiar permissions, here is a quick patch which should resolve most, if not all, problems related to installing Perl extensions:
chmod a+x /usr/bin/perl.exe
cd /usr/lib/perl5/5.8/ExtUtils
chmod a+x,a+r typemap xsubpp .
cd /usr/lib/perl5/vendor_perl/5.8/ExtUtils
chmod a+x,a+r xsubpp .
Saturday, May 03, 2008
Best DVD player
DVD players are cheap these days, but not all of them are created equal. The best player, in my humble opinion, is the Philips DVP3960, which can play virtually all known types of video files and nearly all DVD writable media.
The only thing worth knowing (and this is the reason I am publishing it here) is the so-called "region-free" hack. Copying from http://forum.videohelp.com/topic331294.html:
Power Up the unit with NO Disc in the tray.
Open the tray
Press the SETUP Button on the remote control
Navigate to the PREFERENCES page using the Right Arrow Key
Press the DOWN ARROW one time to select
Press the 1 button on your remote control
Press the 3 button on your remote control
Press the 8 button on your remote control
Press the 9 button on your remote control
Press the 3 button on your remote control
Press the 1 button on your remote control
The current Region Code Setting will display
Use the UP/DOWN Arrow Keys to select the region required or '0' for All Regions
Press the PLAY Button on the remote control