klelatti 3 days ago

From the SiFive website [1]

> The Performance P550 scales up to four-core complex configurations while delivering 30% higher performance in less than half the area of a comparable Arm® Cortex®-A75.

Dylan Patel wasn't impressed by these comparisons with A75 [2]

> @SiFive is claiming half the area and higher perf/GHz, but they are using 7nm and 100ns memory latency. Choosing to compare to the 10nm A75 on S845, notorious for its high latency at over 200ns. Purposely ignoring iso-node or other A75 comparisons.

And this analysis seems to be borne out in this Chips and Cheese post.

> As a step along that journey, P550 feels more comparable to one of Arm’s early out-of-order designs like Cortex A57. By the time A75 came out, Arm already accumulated substantial experience in designing out-of-order CPUs. Therefore, A75 is a well polished and well rounded core, aside from obvious sacrifices required for its low power and thermal budgets. P550 by comparison is rough around the edges.

So what to make of SiFive's claims? It seems quite an important claim / comparison.

[1] https://www.sifive.com/cores/performance-p550

[2] https://x.com/dylan522p/status/1415395415000817664

  • pankajdoharey 3 days ago

    Even if I try to avoid hyperbole like saying that by the time SiFive catches up to today's Cortex cores, GPT-30 will be out, etc...

    The fact is that the node difference (7nm vs. 10nm) is critical here: SiFive’s area/power efficiency gains aren’t purely architectural but partly process-driven. Even with that advantage, matching a 2018 A75 (designed for mobile thermal/power limits) in 2024 feels like catching up to ARM’s rearview mirror. ARM’s A720 today benefits from years of iterative refinement (cache hierarchies, branch predictors, memory subsystems) that aren’t easily replicated overnight.

    Scaling beyond cores is another hurdle: interconnects, memory controllers, and accelerators matter just as much as raw IPC. RISC-V’s ecosystem (tools, firmware, software optimization) also lags ARM’s, which could limit adoption even if the P550 were competitive.

    SiFive’s claims highlight RISC-V’s potential, but until they benchmark against modern cores on the same node and demonstrate system-level competitiveness (not just microarchitecture wins), the gap will persist. That said, disruption takes time; ARM wasn’t born polished either. The real test is whether SiFive can close the maturity deficit before ARM’s roadmap (and AI-driven heterogeneity) leaves them behind. I doubt it: the gap in GPU cores alone between Cortex-based SoCs and Apple's M series is huge, and then there are accelerators like NPU cores, which SiFive hasn't even started working on yet.

    Even Cortex-class NPUs are behind Apple's M series, and if large companies like Samsung, Qualcomm, and MediaTek lag behind Apple in quality ARM chips with decent GPUs, NPUs, and on-board memory, what hope does SiFive have? At worst, burn investor money and die. At best, supply chips for your remote control, washing machine, etc. Competing with mainstream applications would not be wise by any standard.

    • Symmetry 2 days ago

      It's also worth noting that they're claiming high perf/GHz, not high perf. It's easy to shrink a chip by using slower but more compact cell libraries that don't increase gate size as much for larger fanout, at the cost of a lower maximum frequency, like AMD's compact cores do. And lowering clock speeds means that your main memory latency, measured in clock cycles, goes down, increasing perf/GHz too.
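
      As a rough illustration of that last point (the numbers here are assumed for the example, not taken from the article): a fixed DRAM latency in nanoseconds costs half as many cycles at half the clock, so a memory-bound loop's perf/GHz looks better even though absolute performance drops.

          #include <stdio.h>

          int main(void) {
              /* Hypothetical figures: ~100 ns DRAM latency, two clock speeds. */
              double latency_ns = 100.0;
              double clocks_ghz[] = { 1.4, 2.8 };
              for (int i = 0; i < 2; i++) {
                  /* ns * GHz = cycles; fewer cycles are lost per miss at lower clocks. */
                  double cycles = latency_ns * clocks_ghz[i];
                  printf("%.1f GHz: a %.0f ns miss costs %.0f cycles\n",
                         clocks_ghz[i], latency_ns, cycles);
              }
              return 0;
          }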

  • bhouston 3 days ago

    > As a step along that journey, P550 feels more comparable to one of Arm’s early out-of-order designs like Cortex A57.

    If it is as fast as an A57 on a similar node, that would still be a major win for RISC-V, which so far has been incredibly slow. The Nintendo Switch 1 uses the Cortex-A57.

  • clamchowder 2 days ago

    (author here) I compared it to the A75 on the Snapdragon 670, not the 845. I chose that comparison because I have a Pixel 3a (my previous daily driver cell phone), and that's the only A75 core I had access to.

    • klelatti 2 days ago

      Hello Chester! Thanks for clarifying and for a terrific post.

  • hajile 2 days ago

    TSMC 7nm is 91 MTr/mm² and 10nm is 53 MTr/mm². That's a 1.72x increase in density, while SiFive is claiming a 2x area advantage, which still puts it ahead of the node scaling alone if the claim is accurate, and that's before the claimed 30% IPC advantage (though at final clock speeds, their claims would still put it about 35% slower than the S845 at 2.8 GHz). The real question is how much denser the A75 could be if they lowered its target clock speed.
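
    For concreteness, a rough back-of-envelope version of that math (inputs are SiFive's claims plus the EIC7700X's 1.4 GHz clock; treat it as illustration, not measurement):

        #include <stdio.h>

        int main(void) {
            /* Claimed or assumed inputs, not measured values. */
            double density_7nm  = 91.0, density_10nm = 53.0;  /* MTr/mm^2 */
            double claimed_area = 2.0;    /* "less than half the area" */
            double claimed_ipc  = 1.30;   /* "30% higher perf/GHz" */
            double p550_ghz = 1.4, s845_ghz = 2.8;

            /* ~1.72x comes from the process alone. */
            printf("process density ratio: %.2fx\n", density_7nm / density_10nm);
            /* What's left of the 2x area claim after normalizing for process. */
            printf("residual area advantage: %.2fx\n",
                   claimed_area / (density_7nm / density_10nm));
            /* ~0.65, i.e. ~35% slower at these clocks despite the IPC claim. */
            printf("relative performance: %.2f\n",
                   claimed_ipc * p550_ghz / s845_ghz);
            return 0;
        }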

    Dylan's complaint about comparing to the S845 is mystifying; he should know better.

    What other A75 SoCs are there? Exynos used it for their mid cores, but that SoC sucked. MediaTek had the Helio P65, but it was announced in late 2019, basically two years after the S845 was announced at the end of 2017. There were some other smaller suppliers from China, but I have no idea who they are. The S850 existed, but as I recall it was just a better binning of the S845, announced months after the original.

    S845 is the ONLY A75 design worth comparing.

phire 3 days ago

> Likely, P550 doesn’t have another BTB level. If a branch misses the 32 entry BTB, the core simply calculates the branch’s destination address when it arrives at the frontend

That seems unwise. It might work well enough for direct branches, but it's going to perform very badly on indirect branches. I would love to see some tests for indirect branch performance (static and dynamic) in your suite.
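
A minimal sketch of the kind of dynamic indirect-branch test I mean (nothing here is from the article's harness): call through a small table of distinct targets chosen pseudo-randomly so the target is unpredictable, then compare against a run with a single fixed target to estimate the mispredict penalty.

    #include <stdio.h>
    #include <time.h>

    static long acc;
    static void f0(void) { acc += 1; }
    static void f1(void) { acc += 3; }
    static void f2(void) { acc += 5; }
    static void f3(void) { acc += 7; }

    int main(void) {
        void (*targets[4])(void) = { f0, f1, f2, f3 };
        unsigned x = 12345;               /* xorshift PRNG state */
        const long iters = 1 << 24;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++) {
            x ^= x << 13; x ^= x >> 17; x ^= x << 5;
            targets[x & 3]();             /* hard-to-predict indirect call */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.2f ns per indirect call (acc=%ld)\n", ns / iters, acc);
        /* Rerun with all four slots pointing at f0 to get the well-predicted
           baseline; the difference approximates the mispredict penalty. */
        return 0;
    }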

> When return stack capacity is exceeded, P550 sees a sharp spike in latency. That contrasts with A75’s more gentle increase in latency.

That might be a direct consequence of the P550's limited BTB. Even when the return stack overflows, the A75 can probably still predict the return as if it was an indirect branch, utilising its massive 3072 entry L1 BTB.

Actually, are you sure the P550 even has a return stack? 16 correctly predicted call/ret pairs just so happens to be what you would get from a 32 entry BTB predicting 16 calls then 16 returns.
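
One way to tell those two hypotheses apart (a rough sketch, not from the article's test suite): time a chain of real calls at increasing depth and watch where the per-call cost jumps; a variant that inserts extra taken branches between the calls to churn the BTB (not shown) would then separate a genuine return stack, which shouldn't care, from a BTB merely memorizing return targets.

    #include <stdio.h>
    #include <time.h>

    /* A chain of depth N = N call/ret pairs per invocation. The volatile
       function pointer keeps the compiler from flattening the recursion into
       a loop (it makes the call indirect, but the target never changes, so
       return prediction is still what dominates). */
    static long chain(int depth);
    static long (* volatile chain_ptr)(int) = chain;
    static long chain(int depth) {
        if (depth == 0) return 1;
        return chain_ptr(depth - 1) + 1;
    }

    int main(void) {
        const long reps = 1 << 20;
        for (int depth = 1; depth <= 64; depth *= 2) {
            struct timespec t0, t1;
            long sink = 0;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long r = 0; r < reps; r++) sink += chain_ptr(depth);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("depth %2d: %.2f ns per call/ret pair (sink=%ld)\n",
                   depth, ns / ((double)reps * depth), sink);
        }
        return 0;
    }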

  • clamchowder 2 days ago

    (author here) Just a 32 entry BTB is technically a possibility from microbenchmark results, but the EIC7700X datasheet straight up says:

    "a branch prediction unit that is composed of a 32-entry Branch Target Buffer (BTB), a 9.1 KiB-entry Branch History Table (BHT), a 16-entry Return Address Stack (RAS), 512-entry Indirect Jump Target Predictor (IJTP), and a 16-entry Return Instruction Predictor"

    • phire 2 days ago

      Ah, that makes so much more sense.

      So it does have a 2nd level BTB, it's just that it's labeled as IJTP and is potentially only used by indirect branches.

      • clamchowder 2 days ago

        No, that's not a second level BTB in that regular direct branches don't seem to use it. It's only for predicting indirect branches.

  • monocasa 2 days ago

    Calls don't need to be predicted though, since they're unconditional. So 16 call/ret pairs should only need 16 BTB entries.

    • phire 2 days ago

      You need to predict the target (with the BTB) if you want zero-bubble calls (1 cycle latency).

      Otherwise it takes 3 cycles (on the P550) to take a direct call. It doesn't matter that it's unconditional; the frontend can't see the call until after decode. Sure, it's only two extra cycles, but that's a full six instructions on this small 3-wide core.

      And an indirect call? Even if the target is known, it's going to need to fetch it from the register file, which requires going through rename. And unless you put a fast-path in with an extra register-file read-port, you need to go through dispatch and the scheduler too. Probably takes 6-10 cycles to take an indirect branch without a BTB.

      On bigger designs it's even more essential to predict unconditional branches, as their instruction caches take multiple cycles, and then there are quite substantial queues between fetch and decode, and between predict and fetch.

    • eigenform 2 days ago

      Modern machines usually try to predict target addresses, not just the direction of conditional branches. You can implement it for unconditional calls and jumps too, even for direct/relative-addressed ones. That's pretty common nowadays.

      When you calculate the target of a jump, you can cache it (that's what a BTB is). Next time you encounter it, you predict the target by accessing the cached value in the BTB and start fetching early instead of waiting for your jump/call to move all the way through the machine.
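
      As a toy illustration of that caching idea (purely illustrative, not how the P550 or any particular core lays it out), a direct-mapped BTB is little more than a small table indexed by low PC bits:

          #include <stdbool.h>
          #include <stdint.h>

          #define BTB_ENTRIES 32   /* e.g. the first-level size reported for P550 */

          typedef struct {
              uint64_t tag;        /* which PC this entry belongs to */
              uint64_t target;     /* cached destination of the branch/jump/call */
              bool     valid;
          } btb_entry_t;

          static btb_entry_t btb[BTB_ENTRIES];

          /* Predict at fetch time: if this PC hit before, redirect immediately. */
          static bool btb_lookup(uint64_t pc, uint64_t *target) {
              btb_entry_t *e = &btb[(pc >> 2) % BTB_ENTRIES];
              if (e->valid && e->tag == pc) { *target = e->target; return true; }
              return false;
          }

          /* Train later in the pipe, once the real target has been computed. */
          static void btb_update(uint64_t pc, uint64_t target) {
              btb_entry_t *e = &btb[(pc >> 2) % BTB_ENTRIES];
              e->valid = true; e->tag = pc; e->target = target;
          }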

    • colejohnson66 2 days ago

      You still need to predict the target if it’s indirect

    • IshKebab 2 days ago

      You can predict a dynamically dispatched call surely?

turblety 3 days ago

Why are these only 1.4 GHz, when the Raspberry Pi gets to 2.4 GHz? Is it a limitation of the cost of scale that prevents building faster chips? Or does the architecture not really support faster chips?

I really hope that RISC-V can take over as a modern architecture, adding some competition to Intel/AMD and Arm. But they'll need to be able to offer faster chips, or at a minimum more than 4 cores.

Also, does anyone know the rate of progress? I believe 10 years ago these were at 0.5 MHz?

  • rbanffy 3 days ago

    Seems to be limited mostly by power and size constraints, and it's also fabricated on an older node. The target market seems to be simple embedded devices that don't need SIMD instructions, a segment less constrained by software availability. RISC-V is still a very new architecture.

    • daef 3 days ago

      I thought RISC-V is >10 years "old". How long is an ISA "very new"?

      • beeflet 3 days ago

        This same point comes up in threads discussing how the Wayland protocol is 16 years "old". I think it's different if the system starts out as a research project rather than a commercial project, because the time until a usable implementation is much greater. For example, I would say that RISC-V is "newer" than LoongISA/LoongArch despite being slightly older in a literal sense.

        If you look at an arch like x86 or ARM, it was designed right before chips were released and then extended over time. The same goes for the X protocol; it simply extended previous versions.

        If you are designing something from the ground up to avoid the inherent problems of an existing system, it is reasonable to take time and research design problems to make sure you don't recreate the same issues (which would defeat the point of the redesign). It doesn't compete on the same time-frame as an extension of an existing system.

      • rbanffy 3 days ago

        x86 is more than 40. ARM is 30-ish.

        Most importantly, it's been just a few years since we could start getting reasonable RISC-V boards.

        • klelatti 3 days ago

          I think it’s a bit problematic to say ARM is 30-ish years old. The company is 34 years old, but 64-bit Arm (AArch64), which is really very different from its predecessors, was announced in 2011, so it's arguably only 14 years old.

          • K0balt 2 days ago

            I’m pretty sure the IP/experience transfer to AArch64 was massive compared to starting from scratch.

            • klelatti 2 days ago

              So PowerPC is 70 years old because IBM was designing computers in 1954? :)

              • K0balt 2 days ago

                Yes, in a sense. But that is not an apt comparison at all, really.

                ISA design is a complex endeavor. AArch64 benefited from multiple factors directly attributable to ARM's prior art, domain knowledge, and market positioning.

                There is a huge distinction between a globally recognized and dominant ISA engineering firm coming out with a “new” ISA that its engineers had been prepping for years, and the effort required to create a novel ISA and ecosystem from scratch.

                One is just another day at the office, while the other is a very riscy endeavor that requires amassing the talent, creating incentives, creating an engineering culture, and trying to create a niche in a market that was arguably fully populated by other options. And then, you still have to create an entire family of ISAs to match various specification levels.

                It’s not an apples to apples comparison.

                • klelatti 2 days ago

                  Hmmm. I didn’t make any comparison. I just pointed out that AArch64 isn’t 30 odd years old.

                  • K0balt 42 minutes ago

                    Well, that is definitely true. I guess the quibble is with the assumption of equivalence….

                    (which you didn’t explicitly state, so I apologize for reading it that way)

                    ….of the launching of AArch64 with the launching of the RISC-V ISA.

                    They were both theoretically clean slate designs, but one was made by a bunch of academics and the other by a company with decades of ISA design expertise. I’d expect the latter to be much, much more mature, all other things being equal.

                    At any rate, what RISC-V has really done so far is make 8-bit MCUs irrelevant. I used to use a lot of 8-bit parts even with the M0 around, but now with chips like the CH32V003 and its family, it’s just ridiculous to even contemplate.

                    I mean, you can hook up an 8-pin MCU and a couple of resistors to a VGA monitor and a keyboard and have a computer in a heat-shrink tube that walks circles around my ancient Apple II for $0.60.

                    And if you want WiFi, BLE, and some other wireless stuff, with 3x the speed and 100x the memory, the RISC-V ESP32 chips come in at about $1, and they can do pretty strong edge AI. It’s all just silly at this point, and it’s RISC-V that caused that sea change.

        • Someone 3 days ago

          and one likely reason the boards weren’t there was (https://en.wikipedia.org/wiki/RISC-V#Design):

          “As of June 2019, version 2.2 of the user-space ISA[46] and version 1.11 of the privileged ISA[3] are frozen, permitting software and hardware development to proceed. The user-space ISA, now renamed the Unprivileged ISA, was updated, ratified and frozen as version 20191213”

          So, it’s more like 5 years old, compared to ≈40 for 32-bit x86, ≈20 for 64-bit x86.

          • snvzz 2 days ago

            December 2021, for the specs required by RVA22 which the P550 implements.

            P550 was announced shortly after these specs were ratified, so it's one of the earliest RVA22 designs.

            I am expecting there'll be others we will get to see this year.

        • RobotToaster 3 days ago

          First ARM processor was 1985, so 40 years almost exactly.

      • snvzz 2 days ago

        >i thought risc-v is >10years "old".

        September 2019 for the base specs being ratified.

        But even that isn't the true starting gun for anything beyond basic MCUs. For high performance, it's RVA22, and the relevant specs were only ratified in December 2021.

        It takes 3 years from IP to chips, and thus we are seeing the first RVA22 chips now.

        No surprises there.

      • formerly_proven 3 days ago

        The privileged spec is only a couple of years old, and mainline Linux has only run on RISC-V since 2022 or something like that.

    • gliptic 3 days ago

      > while it's also fabricated in an older node

      Is this 7 nm node really older than raspberry pi 5's 16 nm node?

      • nsteel 3 days ago

        > This SoC has a 1.4 GHz, quad core P550 cluster with 4 MB of shared cache. The EIC7700X is manufactured on TSMC’s 12nm FFC process

        > Next up is TSMC’s 12 nm FFC manufacturing technology, which is an optimized version of the company’s CLN16FFC that is set to use 6T libraries (as opposed to 7.5T and 9T libraries) providing a 20% area reduction. Despite noticeably higher transistor density, the CLN12FFC is expected to also offer a 10% frequency improvement at the same power and complexity or a 25% power reduction at the same clock rate and complexity.

        They optimised for density and power, not frequency. A lot of the benefit they're claiming comes just from this.

        https://www.anandtech.com/show/11337/samsung-and-tsmc-roadma...

  • brucehoult 2 days ago

    > Why are these only 1.4ghz frequency, when raspberry pi gets to 2.4ghz?

    Milk-V Megrez is shipping the same SoC running at 1.8 GHz.

    Intel's Horse Creek chip with the same cores ran at 2.4 GHz, but Intel is in trouble and cancelled non-core activities. A working board was shown at Hot Chips 23.

    Clock speed depends on the SoC integrator at least as much as the core designer, and the process node it is made on. And the packaging thermal envelope.

    > Or does the architecture not really support faster chips?

    Of course not. From a technical point of view it's essentially identical to Arm64, and with the same financial investment and comparable engineers it will run at the same speed.

    The P550 is a very early RISC-V design, announced in June 2021, just a few months after the RVA22 spec it implements was published. Three to four years to go from core to SBC is normal in the industry, including for Arm's A53, A72, A76.

    SiFive has designed and shipped two or three major generations of more advanced cores since then: the P670, P870, and P870-D.

  • azinman2 2 days ago

    Does Intel/AMD/ARM really need more competition? Do you think they’re stagnant?

    As I and others have said before, successful consolidation around RISC-V is ultimately a gift to China. Maybe you’re for that; as an American, I am not.

    • wbl 2 days ago

      How is it a gift to China? Architecture isn't where the magic is.

      • toast0 2 days ago

        It's a gift to everyone if an actually open architecture is usable.

        But it's especially a gift to sanctioned regimes, because they can more easily use this architecture for home grown chips that become desirable when mainstream chips are embargoed or under threat of embargo.

        Yes, there's still fabrication, but China and Russia have some fabrication going, just not at the latest nodes. Starting from an open standard makes it a lot easier than if they have to clone an architecture/chip or make a whole ecosystem of architecture and software.

      • le-mark 2 days ago

        Also, do Chinese companies have licenses for all the ARM cores they produce? I assumed they don’t. They traditionally don’t care about IP, so it’s a wash anyway.

      • azinman2 2 days ago

        Because the tooling is make or break. When LLVM, Linux, Rust, debuggers, Android, etc. support it, you have a real chance. Having an ISA that no single company owns means you can develop chips that plug in, although all the extensions of RISC-V make that a little harder.

  • beeflet 3 days ago

    I think that the development of RISC-V will eke out greater market share for Chinese manufacturers, which will have a negative effect on the global order.

    I am also skeptical that it will lead to more open designs, but perhaps it could increase competition enough in the chip design space that more open chip designers can make a space for themselves, especially if the business of chip fabrication is isolated from design.

    • daghamm 3 days ago

      China tried to create a homegrown CPU 15-20 years ago with their MIPS variant, but that died out. I think this time they are much wiser and will pull it off. In 5-8 years we may have Chinese CPUs dominating the Asian market, at least.

      RISC-V leading to more open designs is wishful thinking, plus probably a large dose of PR. MIPS has been open for years, and how many open-source MIPS designs have we seen so far?

      • yjftsjthsd-h 3 days ago

        > China tried to create a homegrown CPU 15-20 years ago with their MIPS variant but that died out.

        Er, are you talking about LoongArch? The CPU line that https://en.wikipedia.org/wiki/Loongson lists new models of this year?

        • daghamm 2 days ago

          Yes, I had no idea they were still working on it.

          You could buy their products in the West as a sort of low-power PC for a short while, but I think once netbooks arrived those just vanished.

          • adgjlsfhk1 2 days ago

            Presumably 100% of supply at this point is going into Chinese government/military projects, where not using a Western design with possible backdoors is worth a price premium.

sakras 2 days ago

I was going to buy one of these until I realized it didn't have vector extensions. I expected something with "Performance" and "Premier" in the name to have them. I think some sort of SIMD capability is table stakes for a lot of workloads these days, so I'm disappointed that there doesn't seem to be a CPU on the market that supports them. I've heard that the vector extensions being stateful makes them particularly hard to implement, which makes me wonder if there needs to be some sort of simpler-to-implement version that mirrors more traditional SIMD ISAs like AVX2 and NEON.
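
For anyone wondering what the "stateful" part looks like in practice: RVV code sets the active vector length and element type with vsetvli before each strip-mined chunk, instead of baking a fixed width into every instruction the way AVX2/NEON do. A rough sketch using the C intrinsics, assuming a toolchain with RVV 1.0 intrinsics support (recent GCC/Clang):

    #include <riscv_vector.h>
    #include <stddef.h>
    #include <stdint.h>

    /* c[i] = a[i] + b[i], strip-mined: each trip asks the hardware how many
       elements it will handle (vl), which also sets the vl/vtype state that
       the following vector instructions implicitly depend on. */
    void vec_add(int32_t *c, const int32_t *a, const int32_t *b, size_t n) {
        while (n > 0) {
            size_t vl = __riscv_vsetvl_e32m1(n);
            vint32m1_t va = __riscv_vle32_v_i32m1(a, vl);
            vint32m1_t vb = __riscv_vle32_v_i32m1(b, vl);
            __riscv_vse32_v_i32m1(c, __riscv_vadd_vv_i32m1(va, vb, vl), vl);
            a += vl; b += vl; c += vl; n -= vl;
        }
    }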

  • adgjlsfhk1 2 days ago

    The SiFive P670 has vector support, and apparently dev boards using it are expected by end of year.

    • sakras 2 days ago

      Oh that's exciting, I will be on the lookout for that!

  • brucehoult 2 days ago

    > I was going to buy one of these until I realized it didn't have vector extensions

    A statement like this pretty much disqualifies your opinions. Why would you expect it to have vector extensions? No one has ever claimed it does, the P550 materials clearly specify the ISA, and the RVA22 spec it implements does not require it.

    However it is the fastest RISC-V CPU you can buy today, on a per core basis, by a factor of two, for all those many workloads that don't benefit from SIMD. For example general use as a desktop PC, compiling code etc.

    It may well be eclipsed by another factor of two by this time next year (and by something with vector), but that's the rapid pace of progress in RISC-V.

    If you need to develop non-vector code on RISC-V this year then this -- or the Milk-V Megrez for half the price and 30% higher clock speed -- is the machine to have.

    If you don't need RISC-V this year then ... go buy an x86. Or an Orion O6 (which I will also be getting)

    > I think some sort of SIMD capability is table stakes for a lot of workloads these days, so I'm disappointed that there doesn't seem to be a CPU on the market that supports them

    Some workloads, yes. Not all.

    As for what is on the market, the CanMV-K230 with RVV 1.0 shipped in November 2023, and a wide variety of boards and laptops with the SpacemiT K1/M1 SoCs with octa core CPU with 256 bit RVV 1.0 shipped in the 2nd half of 2024, including the Banana Pi BPI-F3, Milk-V Jupiter, Sipeed Lichee Pi 3A, Deep Computing DC-Roma II laptop, MuseBook. Various of these have been reviewed by everyone from Jeff Geerling to Christopher Barnatt to Hackaday.

    As for "some sort of SIMD" that's been available in RISC-V since the Allwinner D1 chip shipped in mid 2021, first on the AWOL Nezha EVB, later on things such as the Lichee RV and MangoPi MQ-Pro. These days the same C906 CPU (1 GHz, 128 bit vectors) is available in other SoCs for as little as $3-$5 in the 1 GHz Milk-V Duo with 64 MB RAM (runs a full Linux kernel), up to the $9.90 Duo S with 512 MB RAM.

    There are also various boards with the 1.85 GHz quad core C910 TH1520 SoC, such as the Lichee Pi 4A and Milk-V Meles.

    Not to mention the 64 core 2.0 GHz 128 GB RAM 64 MB L3 cache 32 PCIe lanes Milk-V Pioneer.

    • sakras a day ago

      > A statement like this pretty much disqualifies your opinions

      This is needlessly aggressive. I specialize in writing SIMD code. That is my job. I am very eager to get my hands on a RVV chip so that I can play with a new SIMD ISA. So obviously a non-SIMD chip is useless to me.

      > Why would you expect it to have vector extensions?

      Because it has "Performance" in the name. I double-checked the ISA and saw that it did not have V, and was disappointed.

      > If you don't need RISC-V this year then

      Why would anybody _need_ RISC-V? RISC-V is exciting because it has the possibility of giving the end user higher performance per dollar. Until it does that, it will be relegated to enthusiasts who just like playing with new ISAs.

      • brucehoult a day ago

        > This is needlessly aggressive.

        I apologise.

        > I specialize in writing SIMD code. That is my job. I am very eager to get my hands on a RVV chip

        The P550 core was announced in June 2021, 2 1/2 years ago. It was stated at the time that it doesn't have RVV.

        The P270, announced the same day, is a dual-issue in-order core that DOES have RVV.

        https://www.sifive.com/press/sifive-performance-p550-core-se...

        I've also been eagerly waiting for RVV 1.0 hardware (and programmed RVV 0.7.1 hardware in the meantime, as a very close proxy ... and the C910 has a quite high performance implementation as an OoO core with dual vector pipelines) so I follow the news, mostly on Reddit's /r/riscv.

        Boards with in-order cores implementing RVV 1.0 have been shipping since November 2023.

        Those of us who follow the news have been eagerly awaiting the SG2380 SoC and Milk-V Oasis (and other) boards with it. Sixteen 2.5 GHz OoO SiFive P670 cores with RVV, plus 8 SiFive X280 cores with RVV as an NPU.

        This was originally announced in October 2023 with predicted delivery in September 2024 (which tbh was never believable, and I expressed that at the time), then January 2025, and then late 2025 (which probably should have been the target in the first place), and now it may be cancelled because of US government sanctions.

        > Because it has "Performance" in the name.

        That comes from marketers, not engineers.

        But it is indeed the fastest RISC-V available today, per core, on real-world scalar code, by a factor of 2. (The C910 is close on code that runs from L1 cache, maybe L2).

        > Why would anybody _need_ RISC-V?

        At the moment because they want to develop software to be ready when the machines competitive with the top end of Arm and/or x86 do arrive. Or if the current performance and price level already meets their needs e.g. Samsung with their future line of TVs (prototype already demonstrated at a show) using the P470 core. Samsung has had a team porting and optimising their Tizen OS and Microsoft's CoreCLR JIT for about two years now. No doubt similar activity is happening at LG, who have also announced switching to RISC-V.

        > has the possibility of giving the end user higher performance per dollar. Until it does that

        We are already more than half way from ratification of the base ISA in mid 2019 to that date. Probably 2/3 of the way. The SoCs that will do that are already on the drawing boards. Time is short and there is much to do.

  • drmpeg 2 days ago

    The SpacemiT K1 SoC implements RVV1.0. It can be found on the Banana Pi BPI-F3 and the Milk-V Jupiter boards.

    • adgjlsfhk1 2 days ago

      Unfortunately the K1 is horribly slow. It's an in-order processor, doesn't have an L3 cache, and has pretty slow floating-point multiplies. It's an OK dev board for RVV, but it is closer to a Raspberry Pi 3 than to the P550, which is a lot closer to a Pi 4 in general performance.

      • drmpeg 2 days ago

        Yes, it was very disappointing. I was hoping the 8 cores would give a speedup for compiling code, but no dice. On a large Linux build, the BPI-F3 with make -j8 takes exactly the same time as a make -j4 on the VisionFive 2.

        • brucehoult 2 days ago

          Nothing except the Pioneer (and now this and the Milk-V P550) has an L3 cache.

          The BPI-F3 (etc) has only 512 KB of L2 cache for each cluster of 4 cores, vs 2 MB on the VisionFive 2 (etc), and I think this is the biggest issue. Similarly, the TH1520 only has 1 MB of L2.

          The Pioneer has the same 1 MB of L2 per four cores, but backs this up with 4 MB of L3, and when you're running single-threaded (or just a few threads) that CPU also has access to the other 60 MB of L3 cache on the chip, at still much better than DRAM latency and bandwidth. It's a very nice-performing machine if you can afford it -- though the P550 should make a better desktop machine. I'll know in a week or two when my Megrez arrives...
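
          A quick way to see those capacity boundaries for yourself (a rough pointer-chasing sketch, not the article's methodology): chase a random cycle of pointers through buffers of increasing size and watch the per-load latency step up as you fall out of each cache level.

              #include <stdio.h>
              #include <stdlib.h>
              #include <time.h>

              /* Each load depends on the previous one, so loop time ~= load latency. */
              static double chase_ns(size_t bytes, long iters) {
                  size_t n = bytes / sizeof(void *);
                  void **buf = malloc(n * sizeof(void *));
                  size_t *order = malloc(n * sizeof(size_t));
                  for (size_t i = 0; i < n; i++) order[i] = i;
                  for (size_t i = n - 1; i > 0; i--) {      /* Fisher-Yates shuffle */
                      size_t j = (size_t)rand() % (i + 1);
                      size_t t = order[i]; order[i] = order[j]; order[j] = t;
                  }
                  for (size_t i = 0; i < n; i++)            /* one big random cycle */
                      buf[order[i]] = &buf[order[(i + 1) % n]];

                  void **p = &buf[order[0]];
                  struct timespec t0, t1;
                  clock_gettime(CLOCK_MONOTONIC, &t0);
                  for (long i = 0; i < iters; i++) p = (void **)*p;
                  clock_gettime(CLOCK_MONOTONIC, &t1);
                  double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
                  if (p == NULL) ns = 0;                    /* keep p (and the loop) live */
                  free(buf); free(order);
                  return ns / iters;
              }

              int main(void) {
                  for (size_t kb = 16; kb <= 16384; kb *= 2)
                      printf("%6zu KiB: %.2f ns per load\n", kb, chase_ns(kb * 1024, 1 << 22));
                  return 0;
              }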

somanyphotons 3 days ago

I'd love to see some regular-workload benchmarks that compare at equal frequency, on the same fabrication node, with the same storage, etc. A real apples-to-apples shootout.

ge96 2 days ago

That 3D graph is great

sylware 2 days ago

Slap a good AMD GPU onto that, get some AAA games recompiled for rv64, and then tune performance from there after some QA/debugging... well, for high-end desktop performance.

Then, after a little while of tuning, it will be time to get access to the best silicon process.

  • remexre 2 days ago

    ...Intel Core 2 -level desktop performance?

    Also, I'd imagine you'd want the Ztso extension to port PC games, assuming you mean Rosetta-style instruction translation rather than "somehow get the source and port the engine and all the middleware" -- I don't think the P550 has that extension.
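
    To make the Ztso point concrete (a hedged sketch; actual translator codegen will differ): x86 code assumes TSO ordering on plain loads and stores, so a translator targeting baseline RVWMO has to insert fences to preserve, for example, store-to-store ordering, while on a Ztso core the plain accesses are already strong enough. Assuming the toolchain defines __riscv_ztso when the extension is enabled, per the RISC-V C API:

        /* Translating an x86 "publish" pattern: store data, then store a flag.
           x86-TSO never reorders the two stores; baseline RVWMO may. */
        void publish(volatile int *data, volatile int *flag) {
            *data = 42;
        #if defined(__riscv) && !defined(__riscv_ztso)
            /* Without Ztso, keep the store->store ordering the x86 code relied on. */
            __asm__ volatile("fence w, w" ::: "memory");
        #endif
            *flag = 1;   /* with Ztso, plain stores already stay in order */
        }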

    • sylware 2 days ago

      What?

      An rv64 port running on rv64 ELF/Linux (using the rv64 glibc) with the AMD Mesa drivers. That will reveal where a lot of work certainly remains to be done, at all levels.

      And better to do that with many AAA games (the nasty, badly coded ones, of which there are probably many).

      Better to try to do that work before getting access to the latest silicon process.