Sun Solaris To Linux: Size Matters After All

Linux, Microsoft Windows Vista, and most Unix systems employ either 32-bit or 64-bit file systems, which are likely to handle the needs of even the largest computer systems for the next 10 years.

To understand how big 32-bit systems really are, consider that 20-bit systems, once considered state-of-the-art, can create roughly one million unique addresses. A 24-bit system increases the number of virtual addresses to about 16 million. The jump to 32-bit is a leap to four billion unique addresses, with the ability to retrieve a file or data stored at any of those locations.
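
The arithmetic behind those figures is simple doubling: each additional bit of address width doubles the number of unique addresses. A quick illustrative calculation (mine, not Sun's) makes the progression concrete:

    # Each address width allows 2^bits unique addresses.
    for bits in (20, 24, 32, 64, 128):
        print(f"{bits}-bit: {2**bits:,} unique addresses")

    # 20-bit:  1,048,576                   (~1 million)
    # 24-bit:  16,777,216                  (~16 million)
    # 32-bit:  4,294,967,296               (~4 billion)
    # 64-bit:  18,446,744,073,709,551,616  (~18 quintillion)
    # 128-bit: about 3.4 x 10^38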

The shift to 64-bit file systems, currently under way, represents a quantum jump that many experts believe will serve existing systems for as long as they can run. But Sun is talking about a 128-bit file system, which it concedes is unlikely to be needed for another 10 years, when 64-bit systems start to run out of steam.

In terms of addressable virtual memory, 128-bit systems represent "a very large number," says Chris Ratcliffe, director of marketing for Solaris 10. The number of unique addresses a 128-bit system can create is something like the late Carl Sagan's answer to the question of how many stars there are--billions and billions.

Sun calls Solaris' new feature the Zettabyte File System. A zettabyte is one sextillion bytes--10 to the 21st power--which is a very large number. Because of that, as Solaris 10 is upgraded with its next release in June, it will offer a file system "with virtually unrestricted capacity," says Ratcliffe. A computer running ZFS could address "all the disks currently on the planet," he says.

But Sun also says size doesn't matter.

What's really important, Ratcliffe says, is that ZFS will have built-in data integrity and disk volume management capabilities, instead of requiring users to bolt on storage software that tries to overcome the file system's shortcomings.

Through its built-in checksum capability, ZFS will store a 64-bit checksum computed from the data at each address. When the data is read back, the checksum is recomputed; if it doesn't match the stored value, the system has detected what is probably corrupted data, and recovery measures can be implemented. The checksum approach gives data stored by ZFS a data integrity rate close to 100%, or "19 nines," in Ratcliffe's words.
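
Conceptually, the check works like the sketch below: a checksum is computed and stored when a block is written, then recomputed and compared when the block is read. (This is an illustration of the general idea only; it uses SHA-256 for convenience rather than the 64-bit checksum Ratcliffe describes, and it is not ZFS's actual on-disk format.)

    import hashlib

    def write_block(data: bytes):
        # Store the block alongside a checksum computed from its contents.
        return data, hashlib.sha256(data).digest()

    def read_block(data: bytes, stored_checksum: bytes) -> bytes:
        # Recompute the checksum on read; a mismatch means the block (or the
        # stored checksum) was corrupted somewhere between write and read.
        if hashlib.sha256(data).digest() != stored_checksum:
            raise IOError("checksum mismatch: probable data corruption")
        return data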

ZFS will also include a copy-on-write mechanism that keeps a prior copy of data available the instant a data error is detected, he says.
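
Copy-on-write means a changed block is written to a fresh location and the pointer to it is switched only afterward, so the previous version is never destroyed by the update. A toy sketch of the idea (illustrative only, not Sun's implementation):

    class CowStore:
        """Toy copy-on-write store: updates never overwrite a block in place."""

        def __init__(self):
            self.blocks = {}    # block address -> data
            self.live = {}      # logical name -> address of current block
            self.next_addr = 0

        def write(self, name: str, data: bytes) -> None:
            # Write the new version to a brand-new block first...
            addr = self.next_addr
            self.next_addr += 1
            self.blocks[addr] = data
            # ...then flip the pointer. The old block is left intact, so a
            # consistent prior copy still exists if the new data proves bad.
            self.live[name] = addr

        def read(self, name: str) -> bytes:
            return self.blocks[self.live[name]]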