ニュースな英語のホンヤクコンニャク

Studying with English-language news. This is a blog where I quietly look up the words and expressions I don't know and then explain them as if I were an expert. Posts can run a few days behind the original articles, so they aren't always useful as news. The main focus is IT and technology. There was too much spam, so comments are now moderated. Do forgive me.

D-Wave releases its next-generation quantum annealing chip


What's it take to make a chip with over a million Josephson junctions?



Today, quantum computing company D-Wave is announcing the availability of its next-generation quantum annealer, a specialized processor that uses quantum effects to solve optimization and minimization problems. The hardware itself isn't much of a surprise—D-Wave was discussing its details months ago—but D-Wave talked with Ars about the challenges of building a chip with over a million individual quantum devices. And the company is coupling the hardware's release to the availability of a new software stack that functions a bit like middleware between the quantum hardware and classical computers.




Quantum annealing (a quantum analogue of simulated annealing)

Quantum computers being built by companies like Google and IBM are general-purpose, gate-based machines. They can solve any problem and should show a vast acceleration for specific classes of problems—or they will, as soon as the gate count gets high enough. Right now, these quantum computers are limited to a few-dozen gates and have no error correction. Bringing them up to the scale needed presents a series of difficult technical challenges.


D-Wave's machine is not general-purpose; it's technically a quantum annealer, not a quantum computer. It performs calculations that find low-energy states for different configurations of the hardware's quantum devices. As such, it will only work if a computing problem can be translated into an energy-minimization problem in one of the chip's possible configurations. That's not as limiting as it might sound, since many forms of optimization can be translated to an energy minimization problem, including things like complicated scheduling issues and protein structures.

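To make the "translate it into an energy-minimization problem" idea concrete, here is a minimal sketch in plain Python. It is my own toy example, not anything from the article or from D-Wave's software: a tiny task-selection constraint is written as a QUBO (an energy function over binary variables), and the lowest-energy assignment is found by brute force. A quantum annealer searches the same kind of energy landscape, just over thousands of variables instead of four.

```python
"""Toy illustration (not D-Wave code): phrasing a small constraint problem
as a QUBO, i.e. an energy function over binary variables, and finding the
lowest-energy assignment by brute force."""

from itertools import product

# Toy problem: choose exactly 2 of 4 tasks; tasks 0 and 1 conflict;
# task 3 is slightly preferred. Weights are arbitrary toy values.
A, B, C = 4.0, 3.0, 1.0
n = 4

# QUBO coefficients: E(x) = sum_i Q[i,i]*x_i + sum_{i<j} Q[i,j]*x_i*x_j + offset
Q = {}
for i in range(n):
    Q[(i, i)] = -3.0 * A            # from expanding A*(sum(x) - 2)**2
for i in range(n):
    for j in range(i + 1, n):
        Q[(i, j)] = 2.0 * A         # pairwise term from the same expansion
Q[(0, 1)] += B                      # penalty if the conflicting pair is chosen
Q[(3, 3)] -= C                      # small reward for including task 3
offset = 4.0 * A                    # constant term from the expansion

def energy(x):
    """Evaluate the QUBO energy of a 0/1 assignment x."""
    e = offset
    for (i, j), coeff in Q.items():
        e += coeff * x[i] * (x[j] if i != j else 1)
    return e

# Classical brute force over all 2^4 assignments; an annealer instead
# samples low-energy states of the same landscape.
best = min(product((0, 1), repeat=n), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))
```

The point is only that "solve the problem" becomes "find the assignment with the lowest energy," which is exactly the kind of search the annealing hardware approximates.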

※ For everything from here on, please check the original article's site.

> It's easiest to think of these configurations as a landscape with a series of peaks and valleys, with the problem-solving being the equivalent of searching the landscape for the lowest valley. The more quantum devices there are on D-Wave's chip, the more thoroughly it can sample the landscape. So ramping up the qubit count is absolutely critical for a quantum annealer's utility.
>
> This idea matches D-Wave's hardware pretty well, since it's much easier to add qubits to a quantum annealer; the company's current offering has 2,000 of them. There's also a matter of fault tolerance. While errors in a gate-based quantum computer typically result in a useless output, failures on a D-Wave machine usually mean the answer it returns is low-energy, but not the lowest. And for many problems, a reasonably optimized solution can be good enough.
>
> What has been less clear is whether the approach offers clear advantages over algorithms run on classical computers. For gate-based quantum computers, researchers had already worked out the math to show the potential for quantum supremacy. That isn't the case for quantum annealing. Over the last few years, there have been a number of cases where D-Wave's hardware showed a clear advantage over classical computers, only to see a combination of algorithm and hardware improvements on the classical side erase the difference.
>
> ### Across generations
>
> D-Wave is hoping that the new system, which it is terming Advantage, is able to demonstrate a clear difference in performance. Prior to today, D-Wave offered a 2,000 qubit quantum optimizer. The Advantage system scales that number up to 5,000. Just as critically, those qubits are connected in additional ways. As mentioned above, problems are structured as a specific configuration of connections among the machine's qubits. If a direct connection between any two isn't available, some of the qubits have to be used to make the connection and are thus unavailable for problem solving.
>
> The 2,000 qubit machine had a total of 6,000 possible connections among its qubits, for an average of three for each of them. The new machine ramps up that total to 35,000, an average of seven connections per qubit. Obviously, this enables far more problems to be configured without using any qubits to establish connections. A white paper shared by D-Wave indicates that it works as expected: larger problems fit in to the hardware, and fewer qubits need to be used as bridges to connect other qubits.
>
> Each qubit on the chip is in the form of a loop of superconducting wire called a Josephson junction. But there are a lot more than 5,000 Josephson junctions on the chip. "The lion's share of those are involved in superconducting control circuitry," D-Wave's processor lead, Mark Johnson, told Ars. "They're basically like digital-analog converters with memory that we can use to program a particular problem."
>
> To get the level of control needed, the new chip has over a million Josephson junctions in total. "Let's put that in perspective," Johnson said. "My iPhone has got a processor on it that's got billions of transistors on it. So in that sense, it's not a lot. But if you're familiar with superconducting integrated circuit technology, this is way on the outside of the curve." Connecting everything also required over 100 meters of superconducting wire—all on a chip that's roughly the size of a thumbnail.
>
> While all of this is made using standard fabrication tools on silicon, that's just a convenient substrate—there are no semiconducting devices on the chip. Johnson wasn't able to go into details on the fabrication process, but he was willing to talk about how these chips are made more generally.
>
> ### This isn’t TSMC
>
> One of the big differences between this process and standard chipmaking is volume. Most of D-Wave's chips are housed in its own facility and get accessed by customers over a cloud service; only a handful are purchased and installed elsewhere. That means the company doesn't need to make very many chips.
>
> When asked how many it makes, Johnson laughed and said, "I'm going to end up as the case of this fellow who predicted there would never be more than five computers in this world," before going on to say, "I think we can satisfy our business goals with on the order of a dozen of these or less."
>
> If the company was making standard semiconductor devices, that would mean doing one wafer and calling it a day. But D-Wave considers it progress to have reached the point where it's getting one useful device off every wafer. "We're constantly pushing way past the comfort zone of what you might have at a TSMC or an Intel, where you're looking for how many 9s can I get in my yield," Johnson told Ars. "If we have that high of a yield, we probably haven't pushed hard enough."
>
> A lot of that pushing came in the years leading up to this new processor. Johnson told Ars that the higher levels of connectivity required a new process technology. "[It's] the first time we've made a significant change in the technology node in about 10 years," he told Ars. "Our fab cross-section is much more complicated. It's got more materials, it's got more layers, it's got more types of devices and more steps in it."
>
> Beyond the complexity of fashioning the device itself, the fact that it operates at temperatures in the milli-Kelvin range adds to the design challenges as well. As Johnson noted, every wire that comes in to the chip from the outside world is a potential conduit for heat that has to be minimized—again, a problem that most chipmakers don't face.
>
> ### Making software easier
>
> The new chip is being made available at the same time as a major change is coming to the software that controls it. One way to solve problems is to understand the nature of the problem and the hardware at sufficient detail to know how to set the connections on the chip so that the results it returns answer the problem. But that's pretty highly specialized knowledge, and it's outside the sort of expertise most companies have on hand. So D-Wave is attempting to make it easier by providing an intervening software step that gets rid of some of the complexity.
>
> Under the new system, users will have to understand how to convert their problem into something called a "quadratic unconstrained binary optimization," or QUBO. But if they can do that, they can hand the QUBO to something D-Wave is calling its "hybrid problem solver," which will do everything needed to get it to execute on the quantum annealer.
>
> This is part of a general trend toward what have been termed "hybrid solutions" for quantum computing, a trend that's taking place on both the gate-based and annealing platforms. Researchers have acknowledged that the parts of an algorithm that actually perform best on quantum systems are often only a part of a larger computer science problem, and the other parts may perform just fine—or even better—on classical computer hardware. So the full solution to a problem will require a mix of classical and quantum calculations. As is the case here, this can involve using the classical side to figure out how best to program the quantum side.
>
> For D-Wave systems, the possibilities are even more complex. As mentioned above, one of the challenges of exploring energy minimization landscapes on a quantum annealer is figuring out how to fit enough of the landscape into a limited number of qubits. And there are a lot of ways to potentially tackle that issue. Some problems can be divided up into smaller chunks that are then run separately. In other cases, it's possible to examine the QUBO and find ways of optimizing it so that it fits into the available hardware better.
>
> Other solutions involve doing some calculations on each side of the quantum divide. It's possible to do a sparse sampling of the landscape on classical hardware and then get the quantum annealer to focus on those areas that seem to look promising. Alternately, you could use the quantum annealer to sparsely sample and then use the classical computer to exhaustively explore the areas around any low-energy solutions it returns.
>
> New users can worry about all these potential ways of handling their problems if they want to, but they can now simply turn the issue over to the hybrid solver and let it do the worrying for them instead. And D-Wave is hoping that this will vastly expand its potential user base. "There's a lot less work to be done if you don't have to take them all the way down to the machine language and become experts in all the parameter tuning," D-Wave VP of Software Murray Thom told Ars. "Offsetting that to a hybrid solver means that businesses can focus on formulating their problems, getting their preproduction tests done, and solving them at scale."
>
> ### But is it faster?
>
> The obvious question left after all of this is whether the new hardware and software is ultimately faster than a purely classical solution. But that's a more complicated question than it initially seems. D-Wave is almost certainly going to be able to identify cases where its hardware outperforms classical algorithms as they now stand. But if the past is any guide, that will motivate computer scientists to give those algorithms a careful look—and possibly find ways of optimizing them further. Performance claims are more of a conversation among experts than they are in the supercomputing space, where there are widely accepted benchmarks.
>
> Perhaps more important is the issue of whether any businesses can find specific cases where the quantum annealing delivers them useful solutions faster than existing algorithms. And that may not require D-Wave's machine to return answers faster in every case than classical algorithms, since businesses may only need to solve problems under a specific set of circumstances. D-Wave's ability to return solutions that may not be the most optimal could provide an advantage, since "really good" may be just as useful for businesses as "the best."
>
> D-Wave is pretty confident that this generation, or possibly the next, will be the point where there's a clear advantage to using its hardware. But evaluating that claim will mean waiting for both users and computer scientists to spend more time on it.
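To ground the "hand the QUBO to the hybrid problem solver" step quoted above, here is a hedged sketch using D-Wave's Ocean SDK. The article itself contains no code, so the package and class names (`dimod`, `dwave-system`, `LeapHybridSampler`) are my assumptions based on the publicly documented SDK and may differ by version; the hybrid call is commented out because it needs a Leap account and API token.

```python
"""Hedged sketch of the QUBO -> hybrid-solver hand-off (my assumption of
the Ocean SDK interface, not code from the article)."""

import dimod

# A toy QUBO as a dict of (i, j) -> coefficient: prefer exactly one of x0, x1.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Tiny problems can be checked exhaustively on a classical machine.
exact = dimod.ExactSolver().sample(bqm)
print(exact.first)   # lowest-energy sample found classically

# For large problems, the same model would be handed to the hybrid solver,
# which decides what to run classically and what to send to the annealer.
# Commented out so the sketch runs offline without a Leap account:
# from dwave.system import LeapHybridSampler
# result = LeapHybridSampler().sample(bqm, time_limit=5)
# print(result.first)
```

The design point is that the same binary quadratic model can be checked exhaustively at toy sizes and submitted unchanged to the hybrid solver at production sizes, which is what lets users stay at the QUBO level instead of tuning the annealer directly.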

The original article is available below.

arstechnica.com