Harnessing machine learning to make managing your storage less of a chore

As far as we know, none of the storage vendors using AI have gone <a href='https://arstechnica.com/science/2019/07/brains-scale-better-than-cpus-so-intel-is-building-brains/'>neuromorphic</a> yet—let alone biological.

While the words “artificial intelligence” generally conjure up visions of Skynet, HAL 9000, and the Demon Seed, machine learning and other types of AI technology have already been brought to bear on many analytical tasks, doing things that humans can’t or don’t want to do—from catching malware to predicting when jet engines need repair. Now it’s getting attention for another seemingly impossible task for humans: properly configuring data storage.

As the scale and complexity of storage workloads increase, it becomes more and more difficult to manage them efficiently. Jobs that could originally be planned and managed by a single storage architect now require increasingly large teams of specialists—which sets the stage for artificial intelligence (née machine learning) techniques to enter the picture, allowing fewer storage engineers to effectively manage larger and more diverse workloads.

Storage administrators have five major metrics they contend with, and finding a balance among them to match application demands is something of a dark art. Those metrics are:

Throughput: Throughput is the most commonly understood metric at the consumer level. Throughput on the network level is usually measured in Mbps—megabits per second—such as you’d see on a typical Internet speed test. In the storage world, the most common unit of measurement is MB/sec—megabytes per second—because storage capacities and file sizes are measured in bytes, not bits. (For reference, there are eight bits in a byte, so 1MB per second is equal to 8Mbps.)
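The bits-versus-bytes conversion above trips people up constantly, so here is the arithmetic as a short, purely illustrative Python sketch:

```python
# Network links are rated in megabits per second (Mbps); storage transfers
# are measured in megabytes per second (MB/sec). Eight bits per byte.
def mbps_to_mb_per_sec(mbps: float) -> float:
    """Convert a bit rate in Mbps to a byte rate in MB/sec."""
    return mbps / 8

print(mbps_to_mb_per_sec(8))     # 1.0 -- 8 Mbps moves 1 MB every second
print(mbps_to_mb_per_sec(1000))  # 125.0 -- a gigabit link tops out at 125 MB/sec
```

In practice this means a "100 Mbps" Internet connection can never deliver more than 12.5 MB/sec of actual file data, before protocol overhead.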

Latency: Latency—at least where storage is concerned—is the amount of time it takes between making a request and having it fulfilled and is typically measured in milliseconds. This may be discussed in a pure, non-throughput-constrained sense—the amount of time to fulfill a request for a single storage block—or in an application latency sense, meaning the time it takes to fulfill a typical storage request. Pure latency is not affected by throughput, while application latency may decrease significantly with increased throughput if individual storage requests are large.
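The pure-versus-application latency distinction can be made concrete with a simple model: total request time is the fixed per-request latency plus the time spent actually transferring the data. The 5 ms latency and 100 MB/sec throughput figures below are illustrative assumptions, not measurements:

```python
def application_latency_ms(pure_latency_ms: float,
                           request_bytes: int,
                           throughput_mb_per_sec: float) -> float:
    """Simple model: total time = fixed per-request latency + transfer time."""
    transfer_ms = request_bytes / (throughput_mb_per_sec * 1_000_000) * 1000
    return pure_latency_ms + transfer_ms

# A 4 KiB block read is dominated by the fixed (pure) latency...
print(application_latency_ms(5.0, 4096, 100))         # ~5.04 ms
# ...while a 100 MB request is dominated by transfer time, so raising
# throughput shrinks its total time dramatically; pure latency barely matters.
print(application_latency_ms(5.0, 100_000_000, 100))  # 1005.0 ms
```

This is why the article notes that application latency can fall with increased throughput when requests are large, while pure latency cannot.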

IOPS: IOPS is short for “input/output operations per second” and generally refers to the raw count of discrete disk read or write operations that the storage stack can handle. This is usually the first limit a storage system hits, and it can be reached either at the storage controller or on the underlying medium. Consider the difference between reading a single large file versus a lot of tiny files from a traditional spinning hard disk drive: the large file might read at 110MB/sec or more, while the tiny files, stored on the same drive, may read at 1MB/sec or even less, because each tiny file costs the drive a separate seek and read operation.
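The large-file-versus-tiny-files gap falls straight out of the arithmetic: effective throughput is just IOPS multiplied by the average request size. The per-drive figures below are illustrative assumptions for a typical spinning disk, not benchmarks:

```python
def throughput_mb_per_sec(iops: float, avg_request_bytes: int) -> float:
    """Effective throughput is operation rate times average operation size."""
    return iops * avg_request_bytes / 1_000_000

# Sequential read: few seeks, so the drive streams large requests back to back.
print(throughput_mb_per_sec(110, 1_000_000))  # 110.0 MB/sec
# Random tiny-file reads: the head seeks for nearly every 4 KiB operation,
# capping a spinning disk at perhaps ~150 operations per second.
print(throughput_mb_per_sec(150, 4096))       # ~0.61 MB/sec
```

The same drive delivers wildly different throughput depending on the workload's IOPS profile, which is exactly why the metric matters independently of raw MB/sec.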

Capacity: The concept is simple—it’s how much data you can cram onto the device or stack—but the units are unfortunately a hot mess. Capacity can be expressed in GiB, TiB, or PiB—so-called “gibibytes,” “tebibytes,” or “pebibytes”—but is typically expressed in the more familiar GB, TB, or PB (that’s gigabytes, terabytes, or petabytes). The difference is that “giga,” “tera,” and “peta” are decimal prefixes based on powers of ten (so 1GB properly equals 1000^3 bytes, or exactly one billion bytes), whereas “gibi,” “tebi,” and “pebi” are binary prefixes based on powers of two (so one “gibibyte” is 1024^3 bytes, or 1,073,741,824 bytes). Filesystems almost universally report sizes in powers of two, whereas storage device specifications are almost universally given in powers of ten. There are complex historical reasons for the different ways of reckoning, but one reason the two methods continue to coexist is that the decimal reckoning conveniently lets drive manufacturers print a larger-looking capacity on the box in the store.
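The decimal-versus-binary gap is why a drive labeled “1 TB” shows up as roughly 931 GiB once the operating system mounts it. A quick, illustrative sketch of the conversion:

```python
def tb_to_gib(tb: float) -> float:
    """Convert decimal terabytes (10^12 bytes) to binary gibibytes (2^30 bytes)."""
    return tb * 10**12 / 2**30

print(round(tb_to_gib(1)))  # 931 -- the "missing" ~69 GiB was never there
```

Nothing is lost in the conversion; the drive holds exactly the bytes promised, and only the unit of account changes between the box and the filesystem.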

Security: For the most part, security only comes into play when you’re balancing cloud storage versus local storage. With highly confidential data, on-premises storage may be more tightly locked down, with physical access strictly limited to the personnel who work directly for a company and have an actual need for that access. Cloud storage, by contrast, typically means a much larger set of personnel has physical access, and they may not work directly for the company that owns the data. Security can be an internal concern of the company that owns the data, or a regulatory one imposed by frameworks such as HIPAA or PCI DSS.

Enterprise administrators face an increasingly vast variety of storage types and an equally varied list of services to support with different I/O metrics to meet. A large file share might need massive scale and decent throughput as cheaply as it can be gotten but also must tolerate latency penalties. A private email server might need fairly massive storage with good latency and throughput but have a relatively undemanding IOPS profile. A database-backed application might not need to move much data, but it might also require very low latency while under an incredibly punishing IOPS profile.

If we only had these three services to deploy, the job seems simple: put the big, non-confidential file share on relatively cheap Amazon S3 buckets, the private mail server on local spinning rust (that’s storage admin speak for traditional hard disk drives), and throw the database on local SSDs. Done! But like most “simple” problems, this one gets increasingly complex and difficult to manage as the number of variables scales up. Even a small business with fewer than fifty employees might easily have many dozens of business-critical services; an enterprise typically has thousands.

With thousands of services competing for resources with differing performance and confidentiality targets—some long-running, others relatively ephemeral and up for only days or weeks at a time—management of the underlying storage rapidly outpaces the human ability to make informed and useful changes. Management effort quickly falls back to best-effort, “shotgun” approaches tailored to the preferences of the organization or department: spend too much but get high performance and/or minimal maintenance requirements in return, or gamble on cheaper services and hope that the cost savings outweigh the penalties in missed performance targets or increased IT payroll.
