“Don’t look back. Something might be gaining on you.” – Satchel Paige
Last week’s RISC-V Tech Symposium, held at the Computer History Museum in Mountain View, had all of the spirit of a tent revival meeting, which is appropriate because microprocessor ISA supporters tend to sound a lot like religious zealots. The open-source RISC-V ISA is no different in this respect. Although begun for the most grounded of purposes – the education of future processor designers – the RISC-V ISA is rapidly growing far beyond its origins and is bumping along the road to become a full-blown, commercial microprocessor ISA despite its open-source underpinnings. The underwriter of the worldwide series of RISC-V Tech Symposiums, SiFive, is one of the loudest cheerleaders for the RISC-V ISA’s trek towards commercial relevance and, perhaps, world dominance.
My July 19, 2018 EEJournal article titled “RISC-V Aims for World Domination” contains a long history of microprocessor ISAs. In it, I quoted Dr. David Patterson, the professorial godfather of RISC-V, who asked: “Why should there be open-source compilers but not open-source ISAs?” The article also quoted Patterson discussing the idea that the open RISC-V ISA could be a cure for microprocessor ISAs’ hacking vulnerabilities. The quotes came from a presentation Patterson made at the 2018 annual dinner meeting of the IEEE-CNSV (IEEE Consultants’ Network of Silicon Valley). Patterson concluded that presentation with a simple statement: “That’s my simple goal for RISC-V: world domination.”
Fast forward half a year to this month’s RISC-V Tech Symposium at the Computer History Museum. Several hundred people attended, and they didn’t come just for the bagels and fruit. They came to hear about the new ISA religion. Although they got plenty of that sort of verbiage from the SiFive presenters at the symposium, who promised the world, including a solution to the slowing of Moore’s Law, attendees got a more sober and rational look at the state of RISC-V from Martin Fink, who was giving his first presentation as Interim CEO of the RISC-V Foundation. Fink is also the CTO at Western Digital (WD), a position he’s held for just over two years.
As a reminder, WD announced late last year that it was developing a RISC-V processor core (not an ISA, an IP core) called SweRV, which the company is putting into the open-source community. In addition, WD has developed an open-source, RISC-V instruction-set simulator for the SweRV processor core, and the company has launched an open-standard initiative for cache-coherent memory over a network to be used in RISC-V environments. WD is serious about RISC-V. The company’s press announcement about the processor core states, “Western Digital expects both the SweRV Core and SweRV ISS will help to accelerate the industry’s move to an open source instruction set architecture.”
In other words, WD has bought into the open-source RISC-V movement bigly, which should not come as a surprise. WD ships on the order of a billion processor cores per year in its storage products, so an open-source core implementation that WD manages and pays no royalties to use has got to be attractive.
During his keynote at the RISC-V Tech Symposium, Fink recounted a story about the first time he met Marc Andreessen. This was when Fink was a VP running a “small business unit” at HP. It was the Business Critical Systems group in Fort Collins, Colorado – my old HP stomping grounds – which was responsible for HP’s Integrity servers at the time. Andreessen had just joined HP’s Board of Directors. His “tag line” was “Software is eating the world.” This tag line “irked the heck out of” Fink, who considered HP to be a $100 billion hardware company – no doubt because he was in charge of HP’s Integrity server line at the time.
Now that he’s the [Interim] CEO of the RISC-V Foundation, Fink has revised Andreessen’s tag line for his own use. He proposed the following version:
“Silicon: The oxygen that allows software to breathe”
So it’s not all about the software, says Fink. It’s about the symbiotic relationship between hardware and software. Maybe this is a transcendental revelation to some people, but it seems to me that British computer scientist Maurice Wilkes had the same vision in the late 1940s while he and his team were developing the EDSAC programmable computer. Hardware/software co-development is actually a very old concept in computing. Wilkes and company certainly understood the symbiosis between hardware and software back then, although silicon had nothing to do with computing at the time. It was still the vacuum-tube era.
In any case, Fink sees the RISC-V movement as a way to get back to the Wilkes model of optimizing hardware and software together. According to Fink, putting the RISC-V ISA into the hands of the open-source community and modularizing the ISA unlocks the processor architecture and encourages more innovation.
Further, Fink sees the RISC-V movement as a step away from the polarized world of hardware and software today, as if the processor designers at Intel were ignoring the massive overhang of existing code. (OK, they did exactly that when it came to Itanium, see “News Flash: Itanic Still Sinking” and “Itanium Deathwatch Finally Over,” but, in general, processor designers pay a lot of attention to how well software runs on their machines.) Instead, Fink says, the RISC-V open-source movement is more of a win-win situation, where hardware and software both win.
Fink also said that there are farsighted development teams that are optimizing hardware and software together. “That’s why you’re seeing things like [Google’s] TPU chip,” he added. Google, which has money to burn, decided that it needed to build a proprietary AI chip specifically for machine learning. Google’s TPU version 3.0 consumes so much power that it needs water cooling.
My own take on Google’s TPU is that no commercial processor vendor is likely to stick 65,000 multipliers into a multiprocessor chip aimed at the general market. Few companies besides Google and governments have that much money to throw at a specific application problem, and not many can afford water cooling for their system designs, so I’m not convinced that Google’s TPU represents a good example of co-optimizing hardware and software in the real world where most of us live.
My eight years of experience at Tensilica, where I evangelized proprietary configurable processor cores, taught me that few engineering teams are looking to blaze a new path on the ISA landscape. That’s one more degree of freedom that they don’t have time to deal with, and most of them don’t feel the need to fiddle with ISAs. That’s not the perceived bottleneck, and not every company has the deep pockets and resources of a Google (or a WD for that matter).
Let’s get real here. This concept of universal hardware/software optimization might sound nice, but the RISC-V zealotry and hype gloss over some bedrock, practical, economic limitations. Fortunately, that’s as religious as Fink got about RISC-V. From there, his keynote became more practical and realistic.
“Why RISC-V?” asked Fink. Current ISAs are decades old, he said, and “there are points in time where you need to make architectural leaps forward.” The slowing of Moore’s Law means that physics is not going to take us forward as fast as it once did. Instead, we will rely more on architectural advances coupled with software innovation.
OK, well that’s nice. I agree that the relentless journey along the Moore’s Law path has put architectural innovation on the back burner where it doesn’t belong. It’s good to look to system architecture for the next performance leaps, but the RISC-V ISA is not a revolutionary step beyond other RISC ISAs, despite the supposed decrepitude of those “decades-old” architectures.
From my perspective, the significant RISC-V contribution to our industry is the open-source nature of the ISA. Ecosystem vendors can pick up many IP and silicon partners by joining the RISC-V team. Of course, they also pick up a huge set of vendors by jumping on the Arm Cortex bandwagon, so I don’t see that as a significant differentiating factor for RISC-V – not from my vantage point. In addition, Wave Computing, the newest owner of poor old MIPS, recently announced that it would release the MIPS processor ISA to the open-source community through a program called MIPS Open. I certainly think you can give the RISC-V movement partial credit for this turn of events.
Fink, a “server guy” at heart, then burst a few bubbles among the RISC-V zealots by saying:
“If your ambition is to replace a Xeon processor with a RISC-V processor in a server, just stop right there. That should not be your ambition. That should not be what you want to go after. There’s no point. You’re not going to get anywhere doing that.”
“But you can rethink the architecture of a server,” he continued. “How much main memory should there be in a server? How much storage is enabled? What kinds of silicon customizations can you make to optimize portions of the workload? Where are the hardware/software tradeoffs? That’s when you get to the magic and the power of what RISC-V can bring.”
Again, I strongly agree with Fink’s system-level optimization sentiments, but I’m failing to see the special nature of the modular RISC-V ISA that can break the bottleneck that’s been holding designers back from the flowering of processor-based ASIC design. My experiences at Tensilica and Cadence tell me that the processor ISA is not the major bottleneck. The real limiting factors are getting the development money, the engineering resources, the design and verification tools, and the development time needed to create an ASIC. Pick any processor core, make it free, and you’ll still find a ton of obstacles in the ASIC-development path. Even after attending the RISC-V Symposium, I fail to see how the open-source RISC-V bandwagon fundamentally changes the equation.
Fink continued: “Rethink the problem from the ground up. Don’t [design a system in] a certain way because that’s how it’s been done for the past 10, 20, 30, 40 years.” (We’re within two and a half years of the microprocessor’s 50th birthday, so you can soon add another decade to Fink’s list.) “If you’re going to design your system with a new ISA but a crusty old system architecture, your success will be limited, if you succeed at all,” he said.
Finally getting down to brass tacks, Fink enumerated his view of the RISC-V advantages:
It’s a modular ISA. The instruction set modules (integer, floating-point math, compressed instructions, vector instructions, etc.) are stable and locked in after ratification, which gives ecosystem developers fixed targets and a stable baseline. Next year’s RISC-V processor models will provide the same ISA, and any changes will be confined to new modules proposed by others or to proprietary modules that you develop yourself. That last bit is very Tensilica-like, and, as I’ve said above, the concept was not a big seller. However, I do strongly believe that ISA modularity is a “good thing” in the ASIC design sphere.
Using WD as an example, Fink reminded the audience that WD is making the SweRV RISC-V IP core available to the open-source community. However, WD intends to develop proprietary ISA modules with some number of specialized, custom instructions for specific storage applications. WD does not plan to share these specialized instructions with the open-source community. This approach allows WD to leverage the growing ecosystem for the standardized RISC-V platform while still allowing it to develop and leverage customized processors for a competitive edge.
The freedom to take this route sounds very nice in theory, but, as Fink said, WD will then need to manage the development-tool chain for these specialized processors, because they will have deviated from the standard ISA modules that the overall RISC-V ecosystem supports. From experience, I know that small development teams would love to develop specialized ASICs with architectures tailored to specific applications, but many of these companies are not big enough to shoulder the burden of managing customized tool chains.
Again referencing my days as a Tensilica evangelist and channeling my inner Yogi Berra, the RISC-V bandwagon is like déjà vu all over again.
I am working on an executable instruction decode model of the RV32I ISA.
At first it seemed that it might not be possible to decode the instruction word, but I have been able to decode 40 instructions using only 17 bits out of the 32-bit instruction. Not bad, considering that 6 bits could decode 64 instructions. (A minimal C sketch of this decoding appears after the list below.)
1) It relies heavily on immediate values in the instruction word, which come in 4 different formats (apparently to save a load instruction to put a value into a register). The high-order immediate bit is the sign bit, which is propagated to make negative and positive values for branches, jumps, and register-immediate values.
2) There is no branch-if-greater-than defined, because the assembler knows to swap the rs1 and rs2 field values in order to use the branch-if-less-than, which is defined.
3) The two source registers and the destination register occupy separate 5-bit fields in register-to-register instructions.
4) The 7-bit opcode has 2 low-order bits that must both be ones to identify a 32-bit instruction. If these 2 bits are not both ones, the instruction is one of the 16-bit compressed formats, many of which address only a subset of the registers instead of all 32. There is a 3-bit function field that determines the branch conditions, the ALU operations, the byte/half-word/word size for loads and stores, and the shift types.
An additional 7-bit field has a bit that differentiates register ADD from SUB and left shifts from right shifts. Three opcode bits must not all be ones in the same instruction word, because all ones escapes to the longer (48-bit and up) instruction encodings.
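For readers who want to poke at this themselves, here is a minimal C sketch of the field extraction and immediate sign-extension described above. The bit positions and major opcode values come from the RV32I base encoding; the sample instruction words and the program around them are just illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Fixed field positions shared by all 32-bit RV32I instructions. */
#define OPCODE(i)  ((i) & 0x7f)          /* bits 6:0   */
#define RD(i)      (((i) >> 7)  & 0x1f)  /* bits 11:7  */
#define FUNCT3(i)  (((i) >> 12) & 0x07)  /* bits 14:12 */
#define RS1(i)     (((i) >> 15) & 0x1f)  /* bits 19:15 */
#define RS2(i)     (((i) >> 20) & 0x1f)  /* bits 24:20 */
#define FUNCT7(i)  (((i) >> 25) & 0x7f)  /* bits 31:25 */

/* I-type immediate: bits 31:20, sign-extended from bit 31.
   (Assumes arithmetic right shift of signed ints, as on GCC/Clang.) */
static int32_t imm_i(uint32_t i) { return (int32_t)i >> 20; }

/* B-type immediate: scattered bit fields; the sign bit is bit 31. */
static int32_t imm_b(uint32_t i)
{
    int32_t imm = ((int32_t)(i & 0x80000000)) >> 19; /* imm[12] + sign */
    imm |= (i & 0x80) << 4;                          /* imm[11]        */
    imm |= (i >> 20) & 0x7e0;                        /* imm[10:5]      */
    imm |= (i >> 7)  & 0x1e;                         /* imm[4:1]       */
    return imm;
}

/* Classify the major opcode; the low 2 bits must be 11 for RV32I. */
static const char *major(uint32_t i)
{
    if ((i & 0x3) != 0x3) return "compressed (not RV32I)";
    switch (OPCODE(i)) {
    case 0x03: return "LOAD";   case 0x13: return "OP-IMM";
    case 0x17: return "AUIPC";  case 0x23: return "STORE";
    case 0x33: return "OP";     case 0x37: return "LUI";
    case 0x63: return "BRANCH"; case 0x67: return "JALR";
    case 0x6f: return "JAL";    case 0x73: return "SYSTEM";
    default:   return "other";
    }
}

int main(void)
{
    uint32_t add  = 0x002081b3; /* add  x3, x1, x2 */
    uint32_t addi = 0xfff00093; /* addi x1, x0, -1 */
    uint32_t beq  = 0x00208463; /* beq  x1, x2, 8  */

    printf("%s rd=%u rs1=%u rs2=%u funct7=%u\n",
           major(add), RD(add), RS1(add), RS2(add), FUNCT7(add));
    printf("%s rd=%u rs1=%u imm=%d\n",
           major(addi), RD(addi), RS1(addi), imm_i(addi));
    printf("%s rs1=%u rs2=%u offset=%d\n",
           major(beq), RS1(beq), RS2(beq), imm_b(beq));
    return 0;
}
```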
You are correct of course that Xtensa and ARC ISAs offer similar customisation opportunities to RISC-V.
The obvious difference is that they are proprietary and only Cadence or Synopsys can actually customise them, not some random hacker on his kitchen table with an FPGA and Yosys, and the fees for actually customising them are presumably quite steep.
The less obvious difference is that RISC-V is also attractive as-is to people who don’t want to customise it, simply because it is non-proprietary and has multiple vendors offering a rapidly growing set of microarchitectures at different PPA points, and as a result the RISC-V software, tool chain, and OS ecosystems are rapidly catching and overtaking those of Xtensa and ARC.
You are also absolutely correct that “the RISC-V ISA is not a revolutionary step beyond other RISC ISAs”. As far as the base ISA goes it follows very much the same path as MIPS followed in the 80s and Alpha in the 90s, with only minor tweaks. Those guys got it pretty right, those ISAs are very well suited to implementing system software and applications software in a large range of languages. Historically, even the people who thought you could do better for Lisp or Smalltalk or Prolog with some different ISA or instructions found that, actually, you can’t. Part of that was that the mainstream processors were moving faster than special purpose ones because of Moore’s Law, but even where instructions supposedly suited to those were provided (in e.g. SPARC and MIPS) they went unused.
Apart from cleaning up cruft such as branch delay slots, insufficiently powerful compare-and-branch instructions, lack of byte loads/stores, and large binaries because of a pure fixed-length 32-bit ISA, RISC-V has also been designed from the start to support 32-, 64-, and 128-bit address and integer register sizes, with a huge amount of unused opcode space for future extensions. Yes, MIPS has eventually addressed several of those issues. The new (May 2018) NanoMIPS ISA looks really good to me in most ways. But it seems to have very little support from the company, with just one chip so far, and it doesn’t appear to be part of their open-source initiative.
The big problem those ISAs have (and many others, such as IA64 right now) is not technical features of the ISA but is simply that if or when the company producing it gets bored with it or goes out of business all the customers are left in the lurch. No one else can pick up the reins and carry on and support those customers.
I’d expect that most custom extensions to RISC-V will be specialised enough that in most cases even the people who create and use them will be sufficiently well served by hand writing some library functions in assembly language (or C/C++ with inline asm for the custom instructions, possibly in macros or inline functions) and so won’t need a lot of compiler support. The custom instructions would be used only in the specialised software actually written by that organisation.
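As a sketch of what that looks like in practice (the instruction, its opcode assignments, and the function name here are all hypothetical), a custom R-type instruction placed in the reserved custom-0 major opcode can be wrapped for ordinary C callers using the GNU assembler’s .insn directive:

```c
#include <stdint.h>

/* Hypothetical custom R-type instruction living in the custom-0
   major opcode (0x0b), with funct3 = 0 and funct7 = 0. Only this
   wrapper knows about the custom encoding; the rest of the program
   is compiled by the stock toolchain. Requires a GNU assembler
   with RISC-V .insn support. */
static inline uint32_t accel_op(uint32_t a, uint32_t b)
{
    uint32_t result;
    __asm__ volatile (".insn r 0x0b, 0x0, 0x0, %0, %1, %2"
                      : "=r"(result)
                      : "r"(a), "r"(b));
    return result;
}
```

Code built this way keeps the standard compiler and libraries; only the handful of routines that call accel_op() are tied to the custom hardware.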
The base RISC-V ISA is good enough, as is, that there would in most cases be zero or nearly zero benefit in rebuilding the OS kernel, OS, and standard utilities to use custom instructions, so it’s much more convenient to just keep using all the standard software for standard purposes.
One exception is the RISC-V Vector extension, which is being finalised right now. It causes almost zero size expansion of scalar code (unlike traditional SIMD), so once compilers catch up there will be a lot of simple loops in a lot of code that could benefit from auto-vectorisation. In the short term (by the time hardware comes out), standard dynamically-linked libraries should have versions of memcpy(), memset(), strlen(), strcpy(), strncpy(), etc. using the vector instructions, which will benefit most programs automatically.
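The strip-mining idea that makes those vector loops length-agnostic can be modeled in plain C. This is only a conceptual sketch: vlmax stands in for the hardware’s vector register length, model_vsetvl() for the vsetvli instruction, and the memcpy() call for what would be one vector load/store pair per trip in real RVV code:

```c
#include <stddef.h>
#include <string.h>

/* Model of vsetvli: the hardware reports how many elements it will
   process this trip, capped by the vector register length. */
static size_t model_vsetvl(size_t requested, size_t vlmax)
{
    return requested < vlmax ? requested : vlmax;
}

/* The same loop runs unchanged on any vector length: ask for n,
   process however many elements the hardware grants, repeat. */
void strip_mined_copy(char *dst, const char *src, size_t n, size_t vlmax)
{
    while (n > 0) {
        size_t vl = model_vsetvl(n, vlmax); /* vsetvli in real RVV */
        memcpy(dst, src, vl);               /* vector load + store */
        dst += vl;
        src += vl;
        n   -= vl;
    }
}
```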
“The base RISC-V ISA is good enough, as is, that there would in most cases be zero or nearly zero benefit in rebuilding the OS kernel, OS, and standard utilities to use custom instructions, so it’s much more convenient to just keep using all the standard software for standard purposes.”
Given the above, just any old RISC ISA will do as long as it runs legacy code (OS, utilities, etc.), and the “custom” instructions are essentially accelerators that probably have to be hand coded in assembly language or C/C++.
Magic will happen because it is open source and free.
Now if C/C++ is used for the source, we come full circle: there is a good-enough RISC with custom hardware doing the critical work (an accelerator). It’s the same old “C to hardware”/HLS situation.
But there is a new kid on the block that can parse C source and create an AST (abstract syntax tree) that a syntax walker can use to produce the operands and operators for the calculations, along with the loops, blocks, and parentheses.
An FPGA or ASIC can then be used as the accelerator.
Yes, “just any old RISC ISA will do” to run the standard code, technically, but business model is the reason for RISC-V to exist, not technical reasons (though it does clean up and improve on its predecessors a little).
The problems with “just any old RISC ISA” are:
1) their encoding spaces are generally already full, and so there is no room to add non-trivial extensions
2) many of them support only 32 bit, or only 64 bit, and none support 128 bit (which, surprisingly, there is already some demand for, and it’s likely to be quite strong by 2040 or 2050)
3) they are proprietary, so no one else can make their own cores or chips at all, let alone extend the ISA. Even ARM, which licenses production and even microarchitecture, does not allow customers to make ISA extensions. If the company that owns the ISA goes out of business — or simply moves on to another ISA — existing customers are orphaned with no options.
“Magic happens” because someone is prepared to pay for it AND they are permitted to do it.
Some kinds of accelerators work well as an I/O device or coprocessor on the system bus (e.g. as FPGA elements), but others really need to be integrated into the execution pipeline and available with the same latency as an add or multiply.
“but others really need to be integrated into the execution pipeline and available with the same latency as an add or multiply.”
The interesting part is that this is finally easy, “in the grand scheme of things,” both in the hardware and in the open-source tool chain plus supporting software environments.
The harder part is the internal fight between KISS product-development goals for long-term support of the product line and holding back the kids (and some mid-life-crisis adults) who see the opportunity for funded dream research.
We fought the same internal fights 30 years ago; some created amazing product advances, and some created awesomely wonderful but unsupportable albatrosses.
The tough part is that only hindsight can tell the difference in the long term … without taking the risks, you will never know.
“but others really need to be integrated into the execution pipeline and available with the same latency as an add or multiply.”
So far I cannot find multiply or divide. Maybe they are in floating point, but how about fixed point?
“none support 128 bit (which, surprisingly, there is already some demand for, and it’s likely to be quite strong by 2040 or 2050)”
Is the demand for 128 bit operands or addressing? How about 48 bit? If 128 bit is needed, does 32 bit go away? How about 8 bit? Will 128 bit also support 64, 48, 32, 16?
So far I see 2 bits that must both be ones for the 32-bit RV32I encodings, then three other bits that must not all be ones, because all ones enables 48-bit instructions, which have another field to enable 64-bit instructions, etc.
Addressing, of course, though addressing is difficult if you don’t also support that size as data, so of course RV128 does.
“There is only one mistake that can be made in computer design that is difficult to recover from—not having enough address bits for memory” – Bell and Strecker, “Computer Structures: What Have We Learned from the PDP-11?”, 1976.
RV128 of course continues to support 8, 16, 32, and 64 bit data. This is all covered in the ISA manual.
And then you seem to confuse addressing and data size with instruction lengths, which are entirely independent. All currently defined RISC-V instructions are 16 or 32 bits in length, with provision for (much) longer instructions later.
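For what it’s worth, the length rule is simple enough to fit in a few lines of C. This sketch follows the encoding scheme in the base spec; classifying anything longer than 32 bits requires looking at further fields, so it just flags those cases:

```c
#include <stdint.h>

/* Instruction-length rule from the RISC-V base spec: bits [1:0]
   other than 11 mark a 16-bit compressed instruction; bits [1:0]
   equal to 11 with bits [4:2] other than 111 mark a standard
   32-bit instruction; bits [4:2] all ones escape to the longer
   (48-bit and up) formats. */
static int insn_length_bytes(uint16_t first_halfword)
{
    if ((first_halfword & 0x03) != 0x03)
        return 2;   /* compressed */
    if ((first_halfword & 0x1c) != 0x1c)
        return 4;   /* standard 32-bit instruction */
    return -1;      /* 48-bit or longer; more fields needed */
}
```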
You will find integer multiply and divide instructions in the chapter on the “M” extension in the ISA manual. That’s the “M” in “rv32imac” etc.
“Yes, “just any old RISC ISA will do” to run the standard code, technically, but business model is the reason for RISC-V to exist, not technical reasons (though it does clean up and improve on its predecessors a little).”
Was there a big market for the predecessors?
How thoughtless of me to inject technical/practical comments into this discussion!
Yes: SPARC, MIPS, and Alpha have all sold quite well in the past.
“You will find integer multiply and divide instructions in the chapter on the “M” extension in the ISA manual. That’s the “M” in “rv32imac” etc.”
Is “extension” synonymous with “wish list”?
I wish the “real” RISC-V would stand up and be recognized.
Also, I mistakenly thought that “predecessors” meant earlier members of the RISC-V family, not competitors.
Karl Stevens
“extension” is synonymous with “optional feature”. For many purposes a computer with load, store, add, subtract, and, or, xor, left shift, right shift and the ability to compare two registers and branch based on the result is all you need. That is called “rv32i” or “rv64i”. Everything else is implemented as library functions which the compiler calls automatically.
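To make that concrete (the exact library symbol depends on the toolchain, but GCC’s libgcc uses __mulsi3 for 32-bit integer multiply), the same C source compiles either to a library call or to a single instruction depending on the -march string:

```c
#include <stdint.h>

/* Built with -march=rv32i, GCC lowers this multiply to a libgcc
   call (__mulsi3); built with -march=rv32im, it becomes a single
   mul instruction. The source code never changes. */
uint32_t scale(uint32_t x, uint32_t y)
{
    return x * y;
}
```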
The HiFive1 board I bought 2.25 years ago has a rv32imac processor, meaning it has the optional hardware instructions for multiply and divide, atomic memory operations, and compressed instruction set (16 bit opcodes duplicating the most common operations, to reduce program size).
I can assure you that is all real and not a “wish list”.
“RISC-V” is the 5th RISC instruction set designed at UC Berkeley. SPARC was based on RISC-I and RISC-II, sharing for example the unusual register-windows feature (which was later realised to be a mistake). RISC-V assembly language is very close to compatible with MIPS assembly language — they certainly have the same flavour and mnemonics — while having a very different binary encoding. Alpha is certainly in the same family. It improved on original MIPS in some ways, and RISC-V has adopted some features from Alpha, most notably the “PALcode” concept.
So, yes, it’s fair to call those the intellectual predecessors of RISC-V.