A Crash-Proof File System for Your Computer May Soon Be a Reality

by Samburaj Das | August 24, 2015

MIT researchers will soon present the first computer file system that is guaranteed to remain uncorrupted and retain its data through a crash, potentially changing modern-day file systems for the better, the MIT News Office revealed.

The file system is a fundamental part of any operating system: it writes data to the hard disk and indexes it so that stored data can be searched and retrieved.

If a computer crashes while data is being written to the hard drive, the file system can become corrupted. That applies to every modern-day file system, including NTFS and FAT32 (Windows), HFS+ (OS X), and XFS (Linux).

Though it cannot yet match the speed of today's file systems, the Massachusetts Institute of Technology (MIT) researchers' file system is the first that is guaranteed corruption-free, and they plan to present it at the ACM Symposium on Operating Systems Principles this October.

Although the new file system is slower than the ones we're used to, the techniques the researchers used to verify its behavior could lead to better, more sophisticated designs and, in time, to file systems that are both efficient and reliable, the MIT News Office reports.

Nickolai Zeldovich, an associate professor of computer science and engineering at MIT and one of the three professors behind the work, said:

What many people worry about is building these file systems to be reliable, both when they’re operating normally but also in the case of crashes, power failure, software bugs, hardware errors… Making sure that the file system can recover from a crash at any point is tricky because there are so many different places that you could crash.

Putting it to the Test

The three professors established the functional reliability of their file system through a process known as formal verification.

Formal verification is a complicated process, and the MIT researchers' work is remarkable because they prove properties of the file system's final, working code directly, not merely of an abstract design.

Formal verification entails mathematically specifying the acceptable bounds of a program's behavior and then proving that the program can never exceed them. Because the process is so involved, it is usually applied to high-level schematic representations of a program rather than to the program itself; when those schematics are translated into working code, complications the proofs never addressed inevitably creep in.
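To give a flavor of what such a proof looks like, here is a minimal, hypothetical sketch in the Lean theorem prover. This is not the researchers' actual code, which is vastly more elaborate; it simply models a disk as a function from block addresses to values and proves, once and for all inputs, the specification that reading an address immediately after writing it returns the value written:

```lean
-- Toy model of a disk: a total function from block addresses to values.
def Disk := Nat → Nat

-- Writing stores a value at one address, leaving all others unchanged.
def write (d : Disk) (addr val : Nat) : Disk :=
  fun a => if a = addr then val else d a

-- Reading simply looks an address up.
def read (d : Disk) (addr : Nat) : Nat := d addr

-- The specification, proved for every disk, address, and value:
-- a read of an address that was just written returns exactly
-- the value written.
theorem read_after_write (d : Disk) (addr val : Nat) :
    read (write d addr val) addr = val := by
  simp [read, write]
```

Unlike a test, which checks a handful of inputs, the theorem holds for every possible disk state. The hard part in a real file system is stating and proving such properties for every intermediate state a crash could expose.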

“All these paper proofs about other file systems may actually be correct, but there’s no file system that we can be sure represents what the proof is about,” said Daniel Ziegler, an MIT undergraduate who also worked on the file system.

As they wrote and tested the file system, the researchers reworked its specifications and proofs again and again to keep them in step with the evolving code.

“No one had done it,” adds Frans Kaashoek, a professor at MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s not like you could look up a paper that says, ‘This is the way to do it.’ But now you can read our paper and presumably do it a lot faster.”

Ulfar Erlingsson, lead manager for security research at Google, who followed the MIT researchers' work remotely, had this to say:

…this is stuff that’s going to get built on and applied in many different domains. That’s what’s so exciting.

Images from Shutterstock and MIT.
