
MRAM Momentum

IEDM happened last month. If you haven’t been there, it’s the go-to place for the most detailed work done on the most advanced and obscure ideas for making ever-smaller electronics. There are a million things covered, most requiring a fair bit of specialized knowledge to be understood – meaning that no one person could grasp all of the papers being presented.

For those of us non-practitioners, add the practical reality of non-English speakers (can’t complain about that in a global industry – I’m lucky they’re using my language) speaking into microphones in giant echoey rooms that garble the sound, combined with slides of 12-point type projected onto screens about the size of the one your grandpa used to use after his camping trips, and yeah… it does a number on your comprehension.

Which means you come home and refer to the highlights already assembled for you by the IEDM session chairs (a very good thing) and use those to drill into the papers you now have, in order to figure out what, out of the mountain of info, to cover.

And, despite interesting things going on elsewhere in the world of memory, the award goes to… MRAMs. We looked at basic MRAM technology a couple of years ago, and we’ve even explored its possible use as logic. Some critical developments were unveiled at IEDM, so we’re giving it the nod this time.

Without rehashing too much, let’s review. Modern MRAMs pass current through a tunneling barrier between two magnetic layers to set whether the two layers are magnetically parallel or anti-parallel. One magnetic layer is fixed, or “pinned”; the other is “free” and has its polarity switched during the write process. Because the junction’s resistance to tunneling current depends on the state of the free layer relative to the pinned layer, a read current allows this structure to implement a memory bit.
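
To make that concrete, here’s a minimal sketch in Python of how the parallel/anti-parallel resistance difference turns into a readable bit. The resistance value, TMR ratio, and read voltage are illustrative assumptions, not numbers from any of the papers.

```python
# Minimal sketch of reading an STT-MRAM bit from its junction resistance.
# The resistance, TMR ratio, and read voltage below are assumptions for
# illustration only.

R_PARALLEL = 5_000.0                     # ohms: low-resistance (parallel) state, assumed
TMR = 1.0                                # 100% tunnel magnetoresistance ratio, assumed
R_ANTIPARALLEL = R_PARALLEL * (1 + TMR)  # high-resistance (anti-parallel) state

READ_VOLTAGE = 0.1                       # volts: kept well below the switching voltage

def read_bit(cell_resistance_ohms: float) -> int:
    """Return 1 for the anti-parallel (high-R) state, 0 for parallel (low-R)."""
    read_current = READ_VOLTAGE / cell_resistance_ohms
    # Compare against the current expected at the midpoint resistance.
    threshold_current = READ_VOLTAGE / ((R_PARALLEL + R_ANTIPARALLEL) / 2)
    return 1 if read_current < threshold_current else 0

if __name__ == "__main__":
    print(read_bit(R_PARALLEL))      # -> 0
    print(read_bit(R_ANTIPARALLEL))  # -> 1
```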

You’ll see the phrase “spin-transfer torque” (STT) used to indicate that, unlike other MRAM technologies, these devices use the spin of the write current to coerce the free layer into submission.

The big challenge at hand is to create large arrays of cells that can be reliably written and read for long periods of time. This means careful control of programming distributions (it’s one thing to write one cell, but millions with varying and overlapping distributions? Not so easy…). And for these memories to displace other memories that might be nearing their scaling limits, they need to compete in speed and power. And they themselves need to scale. And they need to be thermally stable: having contents altered by the temperature of the surrounding environment is likely to be unacceptable.

All of the current papers focus on magnetic tunnel junctions (MTJs) that exploit “perpendicular magnetic anisotropy” (I dare you to say that ten times fast while sober), referred to as pMTJs. The idea is to use a free layer whose magnetization points perpendicular to the film plane, programmed by currents routed through the junction by the read transistor rather than by currents flowing in the plane of the free layer. This helps scaling, although it means that the read/write control transistor, which used to handle only read currents, now needs to handle write currents too, making it bigger. Nonetheless, the final cell scales better than older structures.
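
As a back-of-the-envelope illustration of why that transistor grows (the write current, read current, and drive strength below are assumed values, not figures from the papers):

```python
# Back-of-the-envelope sketch: the access transistor must now source the
# write current, so its width scales with that current. All numbers are
# assumptions for illustration only.

WRITE_CURRENT_UA = 100.0   # assumed STT write current per cell, in microamps
READ_CURRENT_UA = 10.0     # assumed read current, in microamps
DRIVE_UA_PER_UM = 600.0    # assumed NMOS drive strength, microamps per micron of width

def required_width_um(current_ua: float) -> float:
    """Transistor width needed to source a given current, under the assumed drive."""
    return current_ua / DRIVE_UA_PER_UM

print(f"width for read only: {required_width_um(READ_CURRENT_UA):.3f} um")
print(f"width for write too: {required_width_um(WRITE_CURRENT_UA):.3f} um")
# The write requirement, not the read requirement, sets the transistor size --
# and hence much of the cell area.
```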

For that reason, as well as for the overriding need to reduce power, much effort is being placed on reducing the write current, which dominates the overall power dissipation of an MRAM.

In optimizing these cells, researchers are trying to manage four critical voltage distributions: the read voltage; the “critical” voltage VC at which switching happens; the actual engineered switching voltage, which must be larger than the VC of any given cell to ensure reliable writing of any and all cells being written; and the breakdown voltage, at which point irreparable damage to the tunneling barrier can occur.
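
Here’s a toy Monte Carlo sketch of what managing those distributions means: each cell gets its own critical and breakdown voltages, and a single applied write voltage has to clear every cell’s VC without crowding any cell’s breakdown. The distribution parameters and guard band are invented for illustration.

```python
# Toy Monte Carlo of write-voltage margining across a large MRAM array.
# Every cell has its own critical (switching) voltage and breakdown voltage;
# one applied write voltage must exceed all VCs while staying safely below
# all breakdown voltages. All parameters are invented for illustration.

import random

random.seed(0)
N_CELLS = 1_000_000

VC_MEAN, VC_SIGMA = 0.45, 0.03    # volts, assumed critical-voltage distribution
VBD_MEAN, VBD_SIGMA = 1.10, 0.05  # volts, assumed breakdown-voltage distribution
V_WRITE = 0.70                    # volts, the one write voltage applied to every cell

under_written = 0
over_stressed = 0
for _ in range(N_CELLS):
    vc = random.gauss(VC_MEAN, VC_SIGMA)
    vbd = random.gauss(VBD_MEAN, VBD_SIGMA)
    if V_WRITE < vc:
        under_written += 1   # this cell may fail to switch
    if V_WRITE > 0.8 * vbd:  # arbitrary 20% guard band below breakdown
        over_stressed += 1   # this cell's barrier is being over-stressed

print(f"cells at risk of write failure: {under_written}")
print(f"cells at risk of breakdown wear: {over_stressed}")
```

Even with comfortable-looking averages, the tails of a million-cell array are what set the margins.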

Unfortunately, the breakdown energy drops over time with cumulative biasing, meaning that a lot of margin – like 30× in one experiment – is needed to ensure that it stays high enough for long enough to give respectable endurance. The breakdown is proportional to cell area, but not so dependent on the cell shape, suggesting the use of circular cells. Thickness also helps to push out the breakdown, but it does so at the expense of raising VC. Fortunately, the endurance has an exponential dependence on the breakdown energy, so, for example, in one study, a modest increase from 60 to 80 kBT added eight orders of magnitude to the endurance.
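
As a quick worked example of that exponential payoff, assuming the endurance scales roughly as exp(E/kBT) – an assumption on my part, but one consistent with the 60-to-80 kBT example above:

```python
# Sketch of the exponential payoff from raising the breakdown energy.
# Assumes endurance scales roughly as exp(E / kB*T); the specific scaling
# law is an assumption chosen to match the quoted 60 -> 80 kB*T example.

import math

def endurance_gain(e_low_kbt: float, e_high_kbt: float) -> float:
    """Multiplicative endurance improvement for a breakdown-energy increase
    (energies expressed in units of kB*T), under the assumed scaling."""
    return math.exp(e_high_kbt - e_low_kbt)

gain = endurance_gain(60, 80)
print(f"gain: {gain:.2e}x (~{math.log10(gain):.1f} orders of magnitude)")
# -> roughly 4.9e8x, i.e. eight-to-nine orders of magnitude
```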

The material CoFeB for the free layer, originally chosen for its high tunnel magnetoresistance, perpendicular anisotropy at the interface, and manufacturability, also gives the high breakdowns needed. While some teams appear to continue exploring materials, CoFeB with an MgO tunneling layer is by far the most common pairing I saw.

The fixed layer appears to be converging on the use of “synthetic anti-ferromagnetic” (SAF) materials. These are carefully deposited stacks of films having differing magnetic characteristics, perhaps with some spacers as well; they’re built and annealed to provide very precisely engineered magnetic characteristics. They’ve been around for a while for other applications, but they feature in a number of this year’s MRAM papers.

Some presenters focused on very specific problems. One of those is the issue of so-called stray fields. While most of the fixed-layer field lines are perpendicular to the junction, you get field lines at the ends of the layer that are in-plane as well. The solution here is to make the fixed layer larger than the free layer so that those field lines don’t interfere with the free layer; this creates a so-called “stepped structure.”

But “stray-field engineering” continues: its effectiveness depends partly on etch processes that must be controlled to avoid damaging the pinned layer, and a team at ITRI showed how this could be done. They also found that the in-plane stray-field component can actually help with switching, so there’s more work to be done to refine the stack and how it’s built.

In other projects:

  • A team from UCLA and collaborators showed how the anisotropy at the CoFeB/MgO interface can be reduced by applying an electric field. This raises the prospect of using a voltage to write (or at least to assist in writing).
  • A UCB team is looking at using the spin Hall effect for writing in a further attempt to reduce current.
  • And a Japanese team (in a classic example of naming-for-the-acronym: Low-power Electronics Association and Project, or LEAP) showed how adding a second “dummy” free layer (resulting in a double junction) could improve thermal stability and lower write current.

But what happens when you try to put it all together and make something resembling a production device rather than a research device? Two papers took on grander projects of that sort. Everspin (a 2008 spin-out from Freescale) created a 64-Mb DDR3 device with x4, x8, and x16 configurations that could operate at 1.6 GT/s. Careful materials “optimization” (details undivulged) was required to minimize defects and keep error distributions Gaussian.
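
For a sense of scale, the raw peak bandwidth at 1.6 GT/s is just arithmetic (transfers per second times bus width, with no protocol overhead accounted for):

```python
# Peak raw data bandwidth for the Everspin DDR3 configurations at 1.6 GT/s.
# Straight arithmetic: transfer rate times bus width; no protocol overhead.

TRANSFER_RATE_GTPS = 1.6  # giga-transfers per second, as reported

for width_bits in (4, 8, 16):
    gbytes_per_s = TRANSFER_RATE_GTPS * width_bits / 8
    print(f"x{width_bits}: {gbytes_per_s:.1f} GB/s peak")
# -> x4: 0.8 GB/s, x8: 1.6 GB/s, x16: 3.2 GB/s
```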

And Toshiba replaced the SRAM cache on a mobile-oriented CPU with MRAM as a demonstration vehicle. This wasn’t just a matter of swapping things out to prove out CMOS manufacturing: even though the MRAM cell doesn’t have the leakage path that the SRAM cell has, MRAMs have traditionally consumed more power than SRAMs due to their write currents. So the write power had to be brought down, which was done by tackling both the write current and the write time – a first, according to Toshiba.

They were able to get the write time down to 3 ns, using 0.09 pJ to set the state of the 30-nm-diameter cell. They did this by reducing the “damping” of the free layer (its tendency to resist the spin of the write current, or at least that’s my simplistic take) so that less energy was required (less current for less time). They didn’t say in their paper which materials they used for the two magnetic layers; they used MgO for the barrier.
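
The reported numbers imply an average write power of 30 µW; back-calculating a write current requires assuming a write voltage, which the paper doesn’t give, so the figure below is purely illustrative.

```python
# What a 3-ns, 0.09-pJ write works out to. The energy and time are as
# reported; the write voltage is an assumption used only to back out a
# rough current.

WRITE_ENERGY_J = 0.09e-12    # 0.09 pJ (reported)
WRITE_TIME_S = 3e-9          # 3 ns (reported)
ASSUMED_WRITE_VOLTAGE = 0.3  # volts -- an assumption, not from the paper

avg_power_w = WRITE_ENERGY_J / WRITE_TIME_S
implied_current_a = avg_power_w / ASSUMED_WRITE_VOLTAGE

print(f"average write power: {avg_power_w * 1e6:.0f} uW")
print(f"implied write current (at {ASSUMED_WRITE_VOLTAGE} V, assumed): "
      f"{implied_current_a * 1e6:.0f} uA")
# -> 30 uW average; ~100 uA if the write voltage were 0.3 V
```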

The result was to reduce cache power by 80%.

And so by steps both small and large, MRAM continues to tantalize folks as likely the most promising of the new memory technologies. We will be back if this trend continues.

 

More info:

If you have the proceedings, the papers described are in Session 23.
