computer-hardware.bigb
= Computer hardware
{wiki}

= Moore's law
{c}
{parent=Computer hardware}
{wiki}

Born: 1965

Died: 2010+-ish

= Semiconductor device fabrication
{parent=Computer hardware}
{wiki}

= Semiconductor physical implementation
{synonym}

https://en.wikipedia.org/wiki/Semiconductor_device

This is the lowest level of abstraction of a computer, at which the basic gates and power delivery are described.

At this level, you are basically thinking about the 3D layered structure of a chip, and how to make machines that will allow you to create better, usually smaller, gates.

= Semiconductor research institute
{parent=Semiconductor device fabrication}
{tag=Research institute}

= IMEC
{c}
{parent=Semiconductor research institute}
{title2=1984-}
{title2=Belgium}
{wiki}

\Video[https://www.youtube.com/watch?v=RO7E7RX0L2Y]
{title=imec: The Semiconductor Watering Hole by <Asianometry> (2022)}
{description=A key thing they do is run a small prototype fab that brings in-development equipment from different vendors together to make sure they are working well together. Cool.}

= Computer research institute
{parent=Semiconductor research institute}

= Xerox PARC
{c}
{parent=Computer research institute}
{wiki=PARC_(company)}

What a legendary place.

= Semiconductor equipment maker
{parent=Semiconductor device fabrication}

= Company that makes semiconductor production equipment
{synonym}

As mentioned at https://youtu.be/16BzIG0lrEs?t=397 from <video Applied Materials by Asianometry (2021)>, originally the <fabs> would make their own equipment. But eventually things got so complicated that it became worth it for separate companies to focus on equipment, which they then sell to the fabs.

= ASML Holding
{c}
{parent=Semiconductor equipment maker}
{title2=1984-}
{wiki}

= ASML
{c}
{synonym}

As of 2020, the leading maker of the most important <fab> <photolithography> equipment.

\Video[https://www.youtube.com/watch?v=CFsn1CUyXWs]
{title=ASML: TSMC's Critical Supplier by <Asianometry> (2021)}

\Video[https://www.youtube.com/watch?v=SB8qIO6Ti_M]
{title=How ASML Won Lithography by <Asianometry> (2021)}
{description=
First, Perkin-Elmer and the Geophysics Corporation of America dominated the market.

Then a Japanese government project managed to make Nikon and Canon Inc. catch up, and in 1989, when <Ciro Santilli> was born, they had 70% of the market.

https://youtu.be/SB8qIO6Ti_M?t=240 In 1995, ASML had reached 25% market share. Then it managed the following faster than the others:
* TwinScan, reached 50% market share in 2002
* Immersion lithography
* EUV. There was a big split between EUV vs particle beams, and ASML bet on EUV and EUV won.
* https://youtu.be/SB8qIO6Ti_M?t=459 they have an insane number of <software engineers> working on software for the machine, which is insanely complex. They are big on <UML>.
* https://youtu.be/SB8qIO6Ti_M?t=634 they use <ZEISS> optics, don't develop their own. More precisely, the majority owned subsidiary <Carl Zeiss SMT>.
* https://youtu.be/SB8qIO6Ti_M?t=703 <IMEC> collaborations worked well. Notably the <ASML>/<Philips>/<ZEISS> trinity
}

* https://www.youtube.com/watch?v=XLNsYecX_2Q ASML: Chip making goes vacuum with EUV (2009). Self-promotional video, with some good shots of their buildings.

= ASM International
{c}
{parent=ASML Holding}
{title2=1964}
{wiki}

Parent/predecessor of <ASML>.

= Applied Materials
{c}
{parent=Semiconductor equipment maker}
{title2=1967-}
{wiki}

\Video[https://www.youtube.com/watch?v=16BzIG0lrEs]
{title=<Applied Materials> by <Asianometry> (2021)}
{description=They are <chemical vapor deposition> fanatics basically.}

= Power, performance and area
{parent=Semiconductor device fabrication}
{title2=PPA}

https://en.wikichip.org/wiki/power-performance-area

This is the mantra of the <semiconductor industry>:
* power and area are the main limiting factors of chips, i.e., your budget:
  * chip area is ultra expensive because there are sporadic errors in the fabrication process, and each error in any part of the chip can potentially break the entire chip.

    The percentage of working chips is called the yield.

    In some cases however, e.g. if the error only affects a single CPU of a multi-core CPU, they actually deactivate the broken CPU after testing, and sell the worse chip cheaper with clear branding of that: this is called binning https://www.tomshardware.com/uk/reviews/glossary-binning-definition,5892.html
  * power is a major semiconductor limit as of the 2010s and onwards. If everything turned on at once, the chip would burn. Designs have to account for that.
* performance is the goal.

  Conceptually, this is basically a set of algorithms that you want your hardware to run, each one with a respective weight of importance.

  Serial performance is fundamentally limited by the <critical path>[longest path] that electrons have to travel in a given clock cycle.

  The way to work around it is to create pipelines, splitting up single operations into multiple smaller operations, and storing intermediate results in memories.

= Wafer
{disambiguate=electronics}
{parent=Semiconductor device fabrication}

= Wafer
{synonym}

= Czochralski method
{c}
{parent=Wafer (electronics)}
{wiki}

= Semiconductor fabrication plant
{parent=Semiconductor device fabrication}
{title2=foundry}
{wiki}

= Fab
{synonym}
{title2}

They put a lot of expensive equipment together, much of it <company that makes semiconductor production equipment>[made by other companies], and they make the entire chip for companies ordering them.

= Company with a semiconductor fabrication plant
{parent=Semiconductor fabrication plant}

A list of <semiconductor fabrication plant>[fabs] can be seen at: https://en.wikipedia.org/wiki/List_of_semiconductor_fabrication_plants and basically summarizes all the companies that have fabs.

= Fairchild Semiconductor
{c}
{parent=Company with a semiconductor fabrication plant}
{wiki}

= Fairchild
{c}
{synonym}

Some nice insights at: <Robert Noyce: The Man Behind the Microchip by Leslie Berlin (2006)>.

= GlobalFoundries
{c}
{parent=Company with a semiconductor fabrication plant}
{title2=2009}
{title2=AMD spinout}
{wiki}

<AMD> just gave up this risky part of the business amidst the <fabless> boom. Sounds like a wise move. GlobalFoundries then fell more and more behind the state of the art, and moved into more niche areas.

= Infineon Technologies
{parent=Company with a semiconductor fabrication plant}
{tag=Siemens spinoff}
{title2=1999}
{wiki}

= Infineon
{c}
{synonym}

= SMIC
{c}
{parent=Company with a semiconductor fabrication plant}
{tag=Chinese semiconductor industry}
{title2=Chinese TSMC}
{wiki}

\Video[https://www.youtube.com/watch?v=aL_kzMlqgt4]
{title=SMIC, Explained by <Asianometry> (2021)}

= TSMC
{c}
{parent=Company with a semiconductor fabrication plant}
{wiki}

One of the companies that has fabs: it buys machines from companies such as ASML and puts them together in so-called "silicon fabs" to make the chips.

As the quintessential <fab> of the <fabless> era, there is one thing TSMC can never ever do: sell their own designs! It must forever remain a <fab>-only company, one that will never compete with its customers. This is highlighted e.g. at https://youtu.be/TRZqE6H-dww?t=936 from <video How Nvidia Won Graphics Cards by Asianometry (2021)>.

\Video[https://www.youtube.com/watch?v=9fVrWDdll0g]
{title=How <Taiwan> Created TSMC by <Asianometry> (2020)}
{description=Some points:
* UMC failed because it focused too much on the internal market, and was shielded from external competition, so it didn't become world leading
* one of TSMC's great advances was the <fabless> business model approach.
* they managed to do large technology transfers from the West to kickstart things off
* one of their main victories was investing early in <CMOS>, before it became huge, and winning that market
}

= Semiconductor fabrication step
{parent=Semiconductor fabrication plant}
{wiki}

= Chemical vapor deposition
{parent=Semiconductor fabrication step}
{wiki}

= Photolithography
{parent=Semiconductor fabrication step}
{wiki}

= Extreme ultraviolet lithography
{parent=Photolithography}
{wiki}

= EUV
{c}
{synonym}
{title2}

= Photomask
{parent=Photolithography}
{wiki}

= Standard cell library
{parent=Semiconductor device fabrication}
{wiki}

Basically what <register transfer level> compiles to in order to achieve a real chip implementation.

After this is done, the final step is <place and route>.

They can be designed by third parties besides the <semiconductor fabrication plants>. E.g. <Arm Ltd.> markets its <Arm Artisan>[Artisan] Standard Cell Libraries as mentioned e.g. at: https://web.archive.org/web/20211007050341/https://developer.arm.com/ip-products/physical-ip/logic This came from a 2004 acquisition: https://www.eetimes.com/arm-to-acquire-artisan-components-for-913-million/[], <if a product of a big company has a catchy name it came from an acquisition>[obviously].

The standard cell library is typically composed of a bunch of versions of somewhat simple gates, e.g.:
* AND with 2 inputs
* AND with 3 inputs
* AND with 4 inputs
* OR with 2 inputs
* OR with 3 inputs
and so on.

Each of those gates has to be designed by hand as a <3D> structure that can be produced in a given <fab>.

Simulations are then carried out, and the electric properties of those structures are characterized in a standard way as a bunch of tables of numbers that specify things like:
* how long it takes for electrons to pass through
* how much heat it produces
Those are then used in <power, performance and area> estimates.
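
As a rough sketch of how such characterization tables feed timing estimates (the cell names and delay numbers below are made up, not taken from any real library):

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Toy sketch with invented numbers: each characterized cell type has a
// propagation delay in picoseconds. The minimum clock period of a purely
// combinational path is the sum of the delays of the cells along it, and
// the maximum clock frequency is the reciprocal of that period.
double max_frequency_ghz(const std::vector<std::string>& critical_path) {
    // Hypothetical per-cell delay table, as characterization would produce.
    const std::map<std::string, double> delay_ps{
        {"AND2", 10.0}, {"AND3", 12.0}, {"OR2", 11.0}, {"INV", 5.0},
    };
    double total_ps = 0.0;
    for (const auto& cell : critical_path) total_ps += delay_ps.at(cell);
    return 1000.0 / total_ps;  // a 1000 ps period corresponds to 1 GHz
}
```

Real timing libraries are much richer than a single number per cell (delays depend on input slew, output load, voltage, temperature and so on), but the "sum delays along the worst path" idea is the core of it.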

= Open source standard cell library
{parent=Standard cell library}

Open source ones:
* https://www.quora.com/Are-there-good-open-source-standard-cell-libraries-to-learn-IC-synthesis-with-EDA-tools/answer/Ciro-Santilli Are there good open source standard cell libraries to learn IC synthesis with EDA tools?

= Electronic design automation
{parent=Semiconductor device fabrication}
{title2=EDA}
{wiki}

= EDA tool
{c}
{synonym}

A set of software programs that <compile> high level <register transfer level> languages such as <Verilog> into something that a <fab> can actually produce. One is reminded of a <compiler toolchain> but on a lower level.

The most important steps of that include:
* <logic synthesis>: mapping the <Verilog> to a <standard cell library>
* <place and route>: mapping the synthesis output into the 2D surface of the chip

= Electronic design automation phase
{parent=Electronic design automation}

= Logic synthesis
{parent=Electronic design automation phase}
{wiki}

Step of <electronic design automation> that maps the <register transfer level> input (e.g. <Verilog>) to a <standard cell library>.

The output of this step is another <Verilog> file, but one that exclusively uses interlinked cell library components.
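
A toy illustration of one tiny technology-mapping decision (not a real synthesis algorithm): a wide AND in the RTL gets decomposed onto a chain of 2-input AND cells, since that is what the library offers:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy sketch: map an n-input AND from the RTL onto a chain of 2-input AND
// cells, the way a synthesizer decomposes wide operators onto whatever the
// standard cell library provides. Returns the resulting netlist, one
// "AND2 <out> <in1> <in2>" line per cell, with invented wire names n1, n2...
std::vector<std::string> map_wide_and(const std::vector<std::string>& inputs) {
    std::vector<std::string> netlist;
    // a & b & c == (a & b) & c: each extra input adds one AND2 cell.
    std::string acc = inputs.at(0);
    for (size_t i = 1; i < inputs.size(); i++) {
        std::string out = "n" + std::to_string(i);
        netlist.push_back("AND2 " + out + " " + acc + " " + inputs[i]);
        acc = out;  // output of this cell feeds the next one
    }
    return netlist;
}
```

Real synthesizers of course also optimize the logic before mapping and pick cell variants based on timing and load, but the end result is the same kind of netlist of library cells.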

= Place and route
{parent=Electronic design automation phase}
{wiki}

Given a bunch of interlinked <standard cell library> elements from the <logic synthesis> step, this step decides where exactly each of them will go on the 2D (or stacked 2D) <integrated circuit> surface.

A sample output format of place and route is <GDSII>.

= Integrated circuit layout
{parent=Place and route}
{wiki}

= GDSII
{c}
{parent=Integrated circuit layout}
{wiki}

\Image[https://upload.wikimedia.org/wikipedia/commons/a/aa/Silicon_chip_3d.png]
{title=3D rendering of a GDSII file.}

= EDA company
{c}
{parent=Electronic design automation}
{tag=Technology company}

= EDA vendor
{c}
{synonym}

The main ones as of 2020 are:
* <Mentor Graphics>, which was bought by <Siemens> in 2017
* <Cadence Design Systems>
* <Synopsys>

= Cadence Design Systems
{c}
{parent=EDA company}
{wiki}

= Mentor Graphics
{c}
{parent=EDA company}
{wiki}

= Synopsys
{c}
{parent=EDA company}
{wiki}

= Open source EDA tool
{parent=Electronic design automation}

= qflow
{parent=Open source EDA tool}

Cool looking <open source EDA tool>[open source EDA toolchain]:
* http://opencircuitdesign.com/qflow/
* https://github.com/RTimothyEdwards/qflow

They apparently even produced a real working small <RISC-V> chip with the flow, not bad.

= Semiconductor process node
{parent=Semiconductor device fabrication}

= Semiconductor device fabrication bibliography
{parent=Semiconductor device fabrication}

= Asianometry
{c}
{parent=Semiconductor device fabrication bibliography}
{tag=The best technology YouTube channels}

https://www.youtube.com/channel/UC1LpsuAUaKoMzzJSEt5WImw

Very good channel to learn some basics of <semiconductor device fabrication>!

Focuses mostly on the <semiconductor industry>.

https://youtu.be/aL_kzMlqgt4?t=661 from <video SMIC, Explained by Asianometry (2021)> mentions he is of Chinese descent, with ancestors from Ningbo. Earlier in the same video he mentions he worked on some startups. He doesn't appear to speak perfect Mandarin Chinese anymore though, based on his pronunciation of Chinese names.

https://asianometry.substack.com/ gives an abbreviated name "Jon Y".

\Video[https://www.youtube.com/watch?v=X9Zm3K05Utk]
{title=Reflecting on Asianometry in 2022 by <Asianometry> (2022)}
{description=Mentions his insane work schedule: 4 hours research in the morning, then day job, then editing and uploading until midnight. Appears to be based in <Taipei>. Two videos a week. So even at the current 400k subs, he still can't make a living.}

= Integrated circuit
{parent=Computer hardware}
{title2=IC}
{wiki}

It is quite amazing to read through books such as <The Supermen: The Story of Seymour Cray by Charles J. Murray (1997)>, as it makes you notice that earlier <CPUs> (all before the 70's) were not made with <integrated circuits>, but rather smaller pieces glued up on <PCBs>! E.g. the <arithmetic logic unit> was actually a discrete component at one point.

The reason for this can also be understood quite clearly by reading books such as <Robert Noyce: The Man Behind the Microchip by Leslie Berlin (2006)>. The first <integrated circuits> were just too small for this. It was initially unimaginable that a CPU would fit in a single chip! Even just having a very small number of components on a chip was already revolutionary and enough to kick-start the industry. Just imagine how much money any level of integration saved in those early days of production, e.g. as opposed to manually soldering <point-to-point constructions>. The reliability, size and weight gains were also amazing, in particular for military and space applications originally.

\Video[https://www.youtube.com/watch?v=z47Gv2cdFtA]
{title=A briefing on semiconductors by <Fairchild Semiconductor> (1967)}
{description=
Uploaded by the <Computer History Museum>. <There is value in tutorials written by early pioneers of the field>, this is pure <gold>.

Shows:
* <photomasks>
* <silicon> <ingots> and <wafer (electronics)> processing
}

= Application-specific integrated circuit
{parent=Integrated circuit}
{wiki}

= ASIC
{c}
{synonym}
{title2}

= Hardware acceleration
{synonym}
{title2}

= Hardware accelerator
{synonym}

= System on a chip
{parent=Integrated circuit}
{wiki}

= SoC
{c}
{synonym}
{title2}

= Register transfer level
{parent=Computer hardware}
{title2=RTL}
{wiki}

Register transfer level is the abstraction level at which computer chips are mostly designed.

The only two truly relevant RTL languages as of 2020 are: <Verilog> and <VHDL>. Everything else compiles to those, because that's all that <EDA vendors> support.

Much like a <C (language)> compiler abstracts away the <CPU> assembly to:
* increase portability across ISAs
* do optimizations that programmers can't feasibly do without going crazy

Compilers for RTL languages such as Verilog and <VHDL> abstract away the details of the specific <semiconductor physical implementation>[semiconductor technology] used for those exact same reasons.

The compilers essentially compile the RTL languages into a <standard cell library>.

Examples of companies that work at this level include:
* <Intel>. Intel also has <semiconductor fabrication plants> however.
* <Arm Company> which does not have <fabs>, and is therefore called a "<fabless>" company.

= High-level synthesis
{parent=Register transfer level}
{wiki}

= Fabless manufacturing
{parent=Register transfer level}
{wiki}

= Fabless
{synonym}

In the past, most computer designers would have their own <fabs>.

But once designs started getting very complicated, it started to make sense to separate concerns between designers and <fabs>.

What this means is that design companies would primarily write <register transfer level>, then use <electronic design automation> tools to get a final manufacturable chip, and then send that to the <fab>.

It is at this point in time that <TSMC> came along, and benefited from and helped establish this trend.

The term "Fabless" could in theory refer to other areas of industry besides the <semiconductor industry>, but it is mostly used in that context.

= Fabless semiconductor company
{parent=Fabless manufacturing}

= Logic gate
{parent=Register transfer level}
{wiki}

= Truth table
{parent=Logic gate}
{wiki}
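
For example, the truth table of a 2-input AND gate can be generated by enumerating every input combination (a minimal sketch, with a made-up "a b | out" text format):

```cpp
#include <cassert>
#include <string>
#include <vector>

// A truth table lists the output of a gate for every possible input
// combination. Sketch: generate the truth table of a 2-input AND gate,
// one "a b | out" row per combination.
std::vector<std::string> and2_truth_table() {
    std::vector<std::string> rows;
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            rows.push_back(std::to_string(a) + " " + std::to_string(b) +
                           " | " + std::to_string(a & b));
    return rows;
}
```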

= Verilog
{c}
{parent=Register transfer level}
{wiki}

Examples under \a[verilog], more details at <Verilator>.

= Value change dump
{parent=Verilog}
{title2=VCD}
{wiki}

= Verilator
{c}
{parent=Verilog}
{wiki}

<Verilog> simulator that <transpiles> to <C++>.

One very good thing about this is that it makes it easy to create test cases directly in C++. You just supply inputs and clock the simulation directly in a C++ loop, then read outputs and assert them with `assert()`. And you can inspect variables by printing them or with GDB. This is infinitely more convenient than doing these IO-type tasks in <Verilog> itself.
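
A minimal sketch of that loop pattern in plain <C++> (the `Vcounter` class that `verilator --cc counter.v` would generate is replaced here by a hand-written software model of the same synchronous counter, so the example is self-contained and deliberately simplified):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for a Verilator-generated module class. Simplification: eval()
// acts whenever clk is high, so we only call it once per half period,
// whereas real Verilated models track edges through repeated eval() calls.
struct CounterModel {
    uint8_t clk = 0, reset = 0, enable = 0;  // inputs
    uint8_t count = 0;                       // output
    void eval() {                            // Verilator also exposes eval()
        if (clk) {
            if (reset) count = 0;
            else if (enable) count = count + 1;
        }
    }
};

// Clock the "DUT" for n cycles from a plain C++ loop, just like a
// Verilated design: set inputs, toggle clk, eval(), then check outputs.
uint8_t run_counter(int n_cycles, bool enable) {
    CounterModel dut;
    dut.reset = 1;
    dut.clk = 1;
    dut.eval();  // synchronous reset pulse
    dut.reset = 0;
    dut.enable = enable;
    for (int i = 0; i < n_cycles; ++i) {
        dut.clk = 0;
        dut.eval();  // low half of the clock period
        dut.clk = 1;
        dut.eval();  // rising edge: state updates here
    }
    return dut.count;
}
```

The real Verilator testbenches in \a[verilog] follow exactly this shape, just with the generated `V*` class instead of the hand-written model.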

Some simulation examples under \a[verilog].

First install <Verilator>. On <Ubuntu>:
``
sudo apt install verilator
``
Tested on Verilator 4.038, <Ubuntu 22.04>.

Run all examples, which have assertions in them:
``
cd verilator
make run
``

File structure is for example:
* \a[verilog/counter.v]: <Verilog> file
* \a[verilog/counter.cpp]: <C++> loop which clocks the design and runs tests with assertions on the outputs
* \a[verilog/counter.params]: <GCC> compilation flags for this example
* \a[verilog/counter_tb.v]: <Verilog> version of the <C++> test. Not used by Verilator. Verilator can't actually run our `_tb` files, because they do in Verilog IO things that we do better from <C++> in Verilator, so Verilator didn't bother implementing them. This is a good thing.

Example list:
* \a[verilog/negator.v], \a[verilog/negator.cpp]: the simplest non-identity combinatorial circuit!
* \a[verilog/counter.v], \a[verilog/counter.cpp]: sequential hello world. Synchronous active high reset with active high enable signal. Adapted from: http://www.asic-world.com/verilog/first1.html
* \a[verilog/subleq.v], \a[verilog/subleq.cpp]: subleq <one instruction set computer> with separated instruction and data RAMs

= Verilator interactive example
{c}
{parent=Verilator}

The example under \a[verilog/interactive] showcases how to create a simple interactive visual <verilog> example using <verilator> and <sdl>.

\Image[https://raw.githubusercontent.com/cirosantilli/media/master/verilog-interactive.gif]

You could e.g. expand such an example to create a simple (or complex) <video game> for example if you were insane enough. But please don't waste your time doing that, <backward design>[Ciro Santilli begs you].

The example is also described at: https://stackoverflow.com/questions/38108243/is-it-possible-to-do-interactive-user-input-and-output-simulation-in-vhdl-or-ver/38174654#38174654

Usage: install dependencies:
``
sudo apt install libsdl2-dev verilator
``
then run as either:
``
make run RUN=and2
make run RUN=move
``
Tested on Verilator 4.038, Ubuntu 22.04.

File overview:
* and2
  * \a[verilog/interactive/and2.cpp]
  * \a[verilog/interactive/and2.v]
* move
  * \a[verilog/interactive/move.cpp]
  * \a[verilog/interactive/move.v]
* \a[verilog/interactive/display.cpp]

In those examples, the more interesting application specific logic is delegated to Verilog (e.g.: move game character on map), while boring timing and display matters can be handled by SDL and C++.

= VHDL
{c}
{parent=Register transfer level}
{wiki}

Examples under \a[vhdl], more details at: <GHDL>.

= GHDL
{c}
{parent=VHDL}

https://github.com/ghdl/ghdl

Examples under \a[vhdl].

First install <GHDL>. On <Ubuntu>:
``
sudo apt install ghdl
``
Tested on GHDL 1.0.0, <Ubuntu 22.04>.

Run all examples, which have assertions in them:
``
cd vhdl
./run
``

Files:
* Examples
  * Basic
    * \a[vhdl/hello_world_tb.vhdl]: hello world
    * \a[vhdl/min_tb.vhdl]: min
    * \a[vhdl/assert_tb.vhdl]: assert
  * Lexer
    * \a[vhdl/comments_tb.vhdl]: comments
    * \a[vhdl/case_insensitive_tb.vhdl]: case insensitive
    * \a[vhdl/whitespace_tb.vhdl]: whitespace
    * \a[vhdl/literals_tb.vhdl]: literals
  * Flow control
    * \a[vhdl/procedure_tb.vhdl]: procedure
    * \a[vhdl/function_tb.vhdl]: function
  * \a[vhdl/operators_tb.vhdl]: operators
  * Types
    * \a[vhdl/integer_types_tb.vhdl]: integer types
    * \a[vhdl/array_tb.vhdl]: array
    * \a[vhdl/record_tb.vhdl.bak]: record. TODO fails with "GHDL Bug occurred" on GHDL 1.0.0
    * \a[vhdl/generic_tb.vhdl]: generic
  * \a[vhdl/package_test_tb.vhdl]: Packages
    * \a[vhdl/standard_package_tb.vhdl]: standard package
    * textio
        * \a[vhdl/write_tb.vhdl]: write
        * \a[vhdl/read_tb.vhdl]: read
    * \a[vhdl/std_logic_tb.vhdl]: std_logic
  * \a[vhdl/stop_delta_tb.vhdl]: `--stop-delta`
* Applications
  * Combinatoric
    * \a[vhdl/adder.vhdl]: adder
    * \a[vhdl/sqrt8_tb.vhdl]: sqrt8
  * Sequential
    * \a[vhdl/clock_tb.vhdl]: clock
    * \a[vhdl/counter.vhdl]: counter
* Helpers
    * \a[vhdl/template_tb.vhdl]: template

= Processor
{disambiguate=computing}
{parent=Computer hardware}

= Microarchitecture
{parent=Processor (computing)}
{tag=Computer architecture}
{wiki}

= Central processing unit
{parent=Processor (computing)}
{wiki}

= CPU
{c}
{synonym}
{title2}

= CPUs
{c}
{synonym}

= Arithmetic logic unit
{parent=Central processing unit}
{wiki}

= Microcontroller
{parent=Central processing unit}
{wiki}

As of 2020's, it is basically a cheap/slow/simple CPU used in <embedded system> applications.

= MicroPython
{parent=Microcontroller}
{tag=Python (programming language)}
{wiki}

It is interpreted. It actually implements a Python (-like?) interpreter that can run on a microcontroller. See e.g.: <Compile MicroPython code for Micro Bit locally>.

As a result, it is very convenient, as it does not require a C toolchain to build with, but it is also very slow and produces larger images.

= Microcontroller vs CPU
{parent=Microcontroller}
{wiki}

* https://electronics.stackexchange.com/questions/1092/whats-the-difference-between-a-microcontroller-and-a-microprocessor
* https://electronics.stackexchange.com/questions/227796/why-are-relatively-simpler-devices-such-as-microcontrollers-so-much-slower-than

= CPU architecture
{c}
{parent=Central processing unit}
{tag=Microarchitecture}
{wiki}

= Instruction pipelining
{parent=CPU architecture}

The first thing you must understand is the <Classic RISC pipeline> with a concrete example.

= JavaScript CPU microarchitecture simulator
{c}
{parent=Instruction pipelining}
{tag=JavaScript library}

= JavaScript CPU simulator
{synonym}

= y86.js.org
{c}
{parent=JavaScript CPU microarchitecture simulator}
{tag=Y86}

* https://y86.js.org/
* https://github.com/shuding/y86

The good:
* slick <UI>! But very hard to read characters, they're way too small.
* attempts to show state diffs with a flash. But it goes by too fast, would be better if it were more permanent
* <Reverse debugging>

The bad:
* educational <ISA>
* unclear what flags mean from UI, no explanation on hover. Likely the authors assume knowledge of the <Y86> book.

= WebRISC-V
{c}
{parent=JavaScript CPU microarchitecture simulator}
{tag=RISC-V}

https://webriscv.dii.unisi.it/

The good:
* <Reverse debugging>
* circuit diagram

The bad:
* Clunky <UI>
* circuit diagram doesn't show any state??

= Hazard
{disambiguate=computer architecture}
{parent=Instruction pipelining}
{wiki}

= Pipeline stall
{parent=Hazard (computer architecture)}
{wiki}

= Classic RISC pipeline
{parent=Instruction pipelining}
{wiki}

= Microprocessor
{parent=Central processing unit}
{wiki}

Basically a synonym for <central processing unit> nowadays: https://electronics.stackexchange.com/questions/44740/whats-the-difference-between-a-microprocessor-and-a-cpu

= Field-programmable gate array
{parent=Processor (computing)}
{wiki}

= FPGA
{c}
{synonym}
{title2}

= FPGAs
{c}
{synonym}

It basically replaces a bunch of discrete <digital> components with a single chip. So you don't have to wire things manually.

Particularly fundamental if you were to put those chips up on a thousand cell towers for signal processing, and ever felt the need to reprogram them! Resoldering would be fun, wouldn't it? So instead you just do an over-the-wire update of everything.

Vs a <microcontroller>: same reason why you would want to use discrete components: speed. Especially when you want to do a bunch of things in parallel fast.

One limitation is that it only handles digital electronics: https://electronics.stackexchange.com/questions/25525/are-there-any-analog-fpgas There are some analog analogs, but they are much more restricted due to signal loss, which is exactly what digital electronics is very good at mitigating.

\Video[https://www.youtube.com/watch?v=gl4CuzOH6I4]
{title=First FPGA experiences with a Digilent Cora Z7 Xilinx Zynq by <Marco Reps> (2018)}
{description=Good video, actually gives some rationale of a use case that a <microcontroller> wouldn't handle because it is not fast enough.}

\Video[https://www.youtube.com/watch?v=0zrqYy369NQ]
{title=FPGA Dev Board Tutorial by Ben Heck (2016)}

\Video[https://www.youtube.com/watch?v=m-8G1Yixb34]
{title=The History of the FPGA by <Asianometry> (2022)}

= FPGA company
{c}
{parent=Field-programmable gate array}
{tag=Semiconductor company}

= Xilinx
{c}
{parent=FPGA company}
{title2=1984-2022}
{wiki}

= Graphics processing unit
{parent=Processor (computing)}
{wiki}

= GPU
{c}
{synonym}
{title2}

= GPUs
{c}
{synonym}

= General-purpose computing on graphics processing units
{parent=Graphics processing unit}
{wiki}

= GPGPU
{c}
{synonym}
{title2}

= Open source GPU compute benchmark
{c}
{parent=General-purpose computing on graphics processing units}

* https://github.com/ekondis/mixbench <GPL>
* https://github.com/ProjectPhysX/OpenCL-Benchmark custom non-commercial, non-military license

= GPU compute library
{c}
{parent=General-purpose computing on graphics processing units}
{wiki}

= CUDA
{c}
{parent=GPU compute library}
{wiki}

= CUDA hello world
{c}
{parent=CUDA}

Example: https://github.com/cirosantilli/cpp-cheat/blob/d18a11865ac105507d036f8f12a457ad9686a664/cuda/inc.cu

= OpenCL
{c}
{parent=GPU compute library}
{wiki}

= ROCm
{c}
{parent=GPU compute library}
{wiki}

Official hello world: https://github.com/ROCm/HIP-Examples/blob/ff8123937c8851d86b1edfbad9f032462c48aa05/HIP-Examples-Applications/HelloWorld/HelloWorld.cpp

= ROCm on Ubuntu
{c}
{parent=ROCm}

Tested on <Ubuntu 23.10> with <Ciro Santilli's hardware/p14s>:
``
sudo apt install hipcc
git clone https://github.com/ROCm/HIP-Examples
cd HIP-Examples/HIP-Examples-Applications/HelloWorld
make
``
TODO fails with:
``
/bin/hipcc -g   -c -o HelloWorld.o HelloWorld.cpp
clang: error: cannot find ROCm device library for gfx1103; provide its path via '--rocm-path' or '--rocm-device-lib-path', or pass '-nogpulib' to build without ROCm device library
make: *** [<builtin>: HelloWorld.o] Error 1
``

Generic Ubuntu install bibliography:
* https://askubuntu.com/questions/1429376/how-can-i-install-amd-rocm-5-on-ubuntu-22-04
* https://www.reddit.com/r/ROCm/comments/1438p6t/how_to_install_rocm_opencl_on_ubuntu_2304_rx580/

= AI accelerator
{c}
{parent=Processor (computing)}
{wiki}

\Video[https://www.youtube.com/watch?v=L0948yq2Hqk]
{title=The Coming AI Chip Boom by <Asianometry> (2022)}

= Amazon AI accelerator silicon
{c}
{parent=AI accelerator}
{tag=Amazon custom silicon}

* 2020: AWS Trainium, e.g. https://techcrunch.com/2020/12/01/aws-launches-trainium-its-new-custom-ml-training-chip/
* 2018: AWS Inferentia, mentioned at https://en.wikipedia.org/wiki/Annapurna_Labs

= Tensor Processing Unit
{c}
{parent=AI accelerator}
{tag=Google custom hardware}
{title2=TPU}
{title2=2015}
{title2=Google AI accelerator}
{wiki}

= Tesla Dojo
{c}
{parent=AI accelerator}
{title2=2022}

= Computer form factor
{parent=Computer hardware}
{wiki}

= Embedded system
{parent=Computer form factor}
{wiki}

= Distributed computing
{parent=Computer form factor}
{wiki}

= Fog computing
{parent=Distributed computing}
{wiki}

Our definition of fog computing: a system that uses the computational resources of individuals who volunteer their own devices, in which you give each of the volunteers part of a computational problem that you want to solve.

<Folding@home> and <SETI@home> are perfect examples of that definition.
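
A minimal local simulation of that definition (the function name and the interleaved chunking scheme are invented for illustration, and the "volunteers" are of course just simulated here):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy sketch of the fog idea: split one big problem (here, summing the
// integers 1..n) into independent chunks that volunteers could each
// compute on their own devices, then combine the partial results.
int64_t fog_sum(int64_t n, int64_t n_volunteers) {
    std::vector<int64_t> partial(n_volunteers, 0);
    // Hand out interleaved work units, one stream per volunteer.
    for (int64_t i = 1; i <= n; ++i)
        partial[i % n_volunteers] += i;  // each "volunteer" sums its share
    int64_t total = 0;
    for (int64_t p : partial) total += p;  // central node combines results
    return total;
}
```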

= Charity Engine
{c}
{parent=Fog computing}
{wiki}

= Folding@home
{c}
{parent=Fog computing}
{wiki}

= SETI@home
{c}
{parent=Fog computing}
{wiki}

= Is fog computing more efficient than cloud computing?
{parent=Fog computing}
{wiki}

Advantages of fog: there is only one, reusing hardware that would otherwise be idle.

Disadvantages:
* in cloud, you can put your datacenter in the location with the cheapest possible power. On fog you can't.
* on fog there is some waste due to network communication.
* you will likely optimize the code less well because you might be targeting a wide array of different types of hardware, so more power (and time) wastage. Furthermore, some of the hardware used will not be optimal for the task, e.g. <CPU> instead of <GPU>.

All of this makes <Ciro Santilli> wonder if it wouldn't be more efficient for volunteers to simply donate money rather than waste power.

Bibliography:
* https://greenfoldingathome.com/2018/05/28/is-foldinghome-a-waste-of-electricity/[]: useless article, does not compare against centralized computing, asks if folding the proteins is worth the power usage...

= Mainframe computer
{parent=Computer form factor}
{wiki}

= Cloud computing
{parent=Computer form factor}
{wiki}

= Cloud computing market share
{parent=Cloud computing}

\Image[https://web.archive.org/web/20220826031044im_/https://cdn.statcdn.com/Infographic/images/normal/18819.jpeg]
{title=Cloud Computing market share in Q2 2022 by statista.com}
{source=https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/}

= Hyperscale computing
{parent=Cloud computing}
{wiki}

= Hyperscaler
{synonym}

Basically means "company with huge server farms, which it usually rents out", like <Amazon AWS> or <Google Cloud Platform>.

\Image[https://web.archive.org/web/20220803073556im_/https://energyinnovation.org/wp-content/uploads/2020/03/Estimated-global-data-electricity-use-by-data-center-type.png]
{title=Global electricity use by data center type: 2010 vs 2018}
{description=The growth of <hyperscaler> cloud vs smaller cloud and private deployments was incredible in that period!}
{source=https://energyinnovation.org/2020/03/17/how-much-energy-do-data-centers-really-use/}

= Cloud computing platform
{parent=Cloud computing}

= Amazon Web Services
{c}
{parent=Cloud computing platform}
{tag=Amazon product}
{wiki}

= AWS
{c}
{synonym}

= Amazon AWS
{c}
{synonym}

= aws-cli
{c}
{parent=Amazon Web Services}
{wiki}

= AWS service
{parent=Amazon Web Services}

= Amazon Athena
{c}
{parent=AWS service}
{wiki}

<Google BigQuery> alternative.

= Amazon Redshift
{c}
{parent=AWS service}
{wiki}

= Amazon S3
{c}
{parent=AWS service}
{wiki}

= Browse S3 bucket on web browser
{parent=Amazon S3}

They can't even make this basic stuff just work!
* https://stackoverflow.com/questions/16784052/access-files-stored-on-amazon-s3-through-web-browser

= Amazon Elastic Compute Cloud
{c}
{parent=AWS service}
{tag=Platform as a service}
{wiki}

= AWS Elastic Compute
{c}
{synonym}

= Amazon EC2
{synonym}
{title2}

= Amazon EC2 HOWTO
{c}
{parent=Amazon Elastic Compute Cloud}

= Amazon EC2 hello world
{c}
{parent=Amazon EC2 HOWTO}
{tag=Hello world}

Let's get <SSH> access, install a package, and run a server.

As of December 2023 on a `t2.micro` instance, the only type that was part of the free tier at the time, with advertised 1 vCPU, 1 GiB RAM and 8 GiB disk for the first 12 months, on <Ubuntu 22.04>:
``
$ free -h
               total        used        free      shared  buff/cache   available
Mem:           949Mi       149Mi       210Mi       0.0Ki       590Mi       641Mi
Swap:             0B          0B          0B
$ nproc
1
$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.6G  1.8G  5.8G  24% /
``

To install software:
``
sudo apt update
sudo apt install cowsay
cowsay asdf
``

Once HTTP inbound traffic is enabled on security rules for port 80, you can:
``
while true; do printf "HTTP/1.1 200 OK\r\n\r\n`date`: hello from AWS" | sudo nc -Nl 80; done
``
and then you are able to `curl` from your local computer and get the response.
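The same trick can be sketched with Python's standard library instead of `nc`, which also sidesteps the need for `sudo` by binding an unprivileged random port. This is just an illustrative local demo, not anything AWS-specific:

```python
# One-shot HTTP server using only the standard library.
import http.server
import threading
import urllib.request

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        payload = b"hello from AWS"
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging to keep output clean

# Port 0 asks the OS for any free port, so no root needed.
server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d" % server.server_address[1]
body = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(body)  # → hello from AWS
```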

= Amazon EC2 GPU
{c}
{parent=Amazon EC2 HOWTO}

As of December 2023, the cheapest instance with an <Nvidia GPU> is <g4nd.xlarge>, so let's try that out. In that instance, <lspci> contains:
``
00:1e.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
``
TODO meaning of "nd"? "n" presumably means <Nvidia>, but what is the "d"?

Be careful not to confuse it with <g4ad.xlarge>, which has an <AMD GPU> instead. TODO meaning of "ad"? "a" presumably means <AMD>, but what is the "d"?

Some documentation on which GPU is in each instance can be seen at: https://docs.aws.amazon.com/dlami/latest/devguide/gpu.html (https://web.archive.org/web/20231126224245/https://docs.aws.amazon.com/dlami/latest/devguide/gpu.html[archive]) with a list of which GPUs they have at that random point in time. Can the GPU ever change for a given instance name? Likely not. Also as of December 2023 the list is already outdated, e.g. P5 is not shown, though it is mentioned at: https://aws.amazon.com/ec2/instance-types/p5/

When selecting the instance to launch, the GPU apparently does not show anywhere on the instance information page, which is so bad!

Also note that this instance has 4 vCPUs, so on a new account you must first make a customer support request to Amazon to increase your limit from the default of 0 to 4, see also: https://stackoverflow.com/questions/68347900/you-have-requested-more-vcpu-capacity-than-your-current-vcpu-limit-of-0[], otherwise instance launch will fail with:
\Q[You have requested more vCPU capacity than your current vCPU limit of 0 allows for the instance bucket that the specified instance type belongs to. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.]

When starting up the instance, also select:
* image: <Ubuntu 22.04>
* storage size: 30 GB (maximum free tier allowance)
Once you finally manage to <SSH> into the instance, first install drivers and reboot:
``
sudo apt update
sudo apt install nvidia-driver-510 nvidia-utils-510 nvidia-cuda-toolkit
sudo reboot
``
and now running:
``
nvidia-smi
``
shows something like:
``
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   25C    P8    12W /  70W |      2MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
``

If we start from the raw <Ubuntu 22.04>, first we have to install drivers:
* https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html official docs
* https://stackoverflow.com/questions/63689325/how-to-activate-the-use-of-a-gpu-on-aws-ec2-instance
* https://askubuntu.com/questions/1109662/how-do-i-install-cuda-on-an-ec2-ubuntu-18-04-instance
* https://askubuntu.com/questions/1397934/how-to-install-nvidia-cuda-driver-on-aws-ec2-instance

From there basically everything should just work as normal. E.g. we were able to run a <CUDA hello world> just fine with:
``
nvcc inc.cu
./a.out
``

One issue with this setup, besides the time it takes to set up, is that you might also have to pay some network charges as it downloads a bunch of stuff into the instance. We should try out some of the pre-built images. But it is also good to know this pristine setup just in case.

Some stuff we then managed to run:
``
curl https://ollama.ai/install.sh | sh
/bin/time ollama run llama2 'What is quantum field theory?'
``
which gave:
``
0.07user 0.05system 0:16.91elapsed 0%CPU (0avgtext+0avgdata 16896maxresident)k
0inputs+0outputs (0major+1960minor)pagefaults 0swaps
``
so way faster than on my local desktop <CPU>, hurray.

After setup from: https://askubuntu.com/a/1309774/52975 we were able to run:
``
head -n1000 pap.txt | ARGOS_DEVICE_TYPE=cuda time argos-translate --from-lang en --to-lang fr > pap-fr.txt
``
which gave:
``
77.95user 2.87system 0:39.93elapsed 202%CPU (0avgtext+0avgdata 4345988maxresident)k
0inputs+88outputs (0major+910748minor)pagefaults 0swaps
``
so only marginally better than on <Ciro Santilli's hardware/p14s>. It would be fun to see how much faster we could make things on a more powerful GPU.

= Amazon Machine Image
{c}
{parent=Amazon Elastic Compute Cloud}

= AMI
{c}
{synonym}
{title2}

= List of AWS AMIs
{parent=Amazon Machine Image}

= AWS Deep Learning Base GPU AMI (Ubuntu 20.04)
{c}
{parent=List of AWS AMIs}
{tag=Ubuntu 20.04}

These come with pre-installed drivers, so e.g. <nvidia-smi> just works on them out of the box, tested on <g5.xlarge> which has an <Nvidia A10G> GPU. Good choice as a starting point for <deep learning> experiments.

= Amazon Elastic Block Store
{c}
{parent=Amazon Elastic Compute Cloud}

= Launch Amazon EC2 with existing EBS volume
{parent=Amazon Elastic Block Store}

Not possible directly without first creating an AMI image from snapshot? So annoying!
* https://serverfault.com/questions/639537/booting-an-ec2-instance-from-an-existing-ebs-volume
* https://stackoverflow.com/questions/71847637/aws-ec2-how-to-use-pre-existing-ebs-volume-as-main-bootable-disk

= Amazon EBS
{c}
{synonym}
{title2}

The hot and more expensive storage for <Amazon EC2>, where e.g. your <Ubuntu> filesystem will lie.

The cheaper and slower alternative is to use <Amazon S3>.

= EC2 instance store volume
{c}
{parent=Amazon Elastic Compute Cloud}

Large but ephemeral storage for EC2 instances. Predetermined by the <EC2 instance type>. Stays on the local server's disk. Not automatically mounted.
* https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html (https://web.archive.org/web/20231214213241/https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html[archive]) notably highlights what it persists, which is basically nothing
* https://serverfault.com/questions/433703/how-to-use-instance-store-volumes-storage-in-amazon-ec2 mentions that you have to mount it
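Putting those together, a typical format-and-mount session might look like the following sketch; the device name `/dev/nvme1n1` is an assumption, check `lsblk` to find the disk without a mounted partition on your instance type:
``
lsblk
sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /mnt/instance-store
sudo mount /dev/nvme1n1 /mnt/instance-store
df -h /mnt/instance-store
``
Remember that the data does not survive an instance stop/start, so only put scratch data there.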

= vCPU
{c}
{parent=Amazon Elastic Compute Cloud}

= EC2 instance type
{c}
{parent=Amazon Elastic Compute Cloud}

= g4ad.xlarge
{c}
{parent=EC2 instance type}

<AMD GPU>[AMD GPUs] as mentioned at: https://aws.amazon.com/ec2/instance-types/g4/

= g4nd.xlarge
{c}
{parent=EC2 instance type}

<NVIDIA T4> GPUs as mentioned at: https://aws.amazon.com/ec2/instance-types/g4/

= g5.xlarge
{c}
{parent=EC2 instance type}

<Nvidia A10G> GPU, 4 <vCPU>[vCPUs].

= Alibaba Cloud
{c}
{parent=Cloud computing platform}
{tag=Alibaba product}

= Google Cloud Platform
{c}
{parent=Cloud computing platform}
{tag=Google product}
{title2=GCP}
{wiki}

= Type of cloud computing
{parent=Cloud computing}

= Infrastructure as a service
{parent=Type of cloud computing}
{wiki}

= IaaS
{synonym}
{title2}

You <SSH> into an OS like <Ubuntu> and do whatever you want from there. E.g. <Amazon EC2>.

The OS is usually virtualized, and you get only a certain share of the CPU by default.

= Platform as a service
{parent=Type of cloud computing}
{wiki}

= PaaS
{synonym}
{title2}

Highly managed, you don't even see the <Docker (software)> images, only some higher level <JSON> configuration file.

These setups are really convenient and cheap, and form a decent way to try out a new website with simple requirements.

= AWS Elastic Beanstalk
{c}
{parent=Platform as a service}
{tag=Amazon Web Services}
{wiki}

= Heroku
{c}
{parent=Platform as a service}
{wiki}

This feels good.

One problem though is that Heroku is very opinionated, likely like other PaaSes. So if you are trying something that is slightly off the most common use case, you might be fucked.

Another problem with Heroku is that it is extremely difficult to debug a build that is broken on Heroku but not locally. We needed a way to be able to drop into a shell in the middle of build in case of failure. Otherwise it is impossible.

Deployment:
``
git push heroku HEAD:master
``

View <stdout> logs:
``
heroku logs --tail
``

The <PostgreSQL> database seems to be delegated to <AWS>. How to browse the database: https://stackoverflow.com/questions/20410873/how-can-i-browse-my-heroku-database
``
heroku pg:psql
``

Drop and recreate database:
``
heroku pg:reset --confirm <app-name>
``
All tables are destroyed.

Restart app:
``
heroku restart
``

= Send free emails from Heroku
{parent=Heroku}

Arghh, why so hard... tested 2021:
* <Sendgrid>: this one is the first one I got working on free tier!
* Mailgun: the Heroku add-on creates a free plan. This is smaller than the flex plan and does not allow custom domains, and is not available when signing up on mailgun.com directly: https://help.mailgun.com/hc/en-us/articles/203068914-What-Are-the-Differences-Between-the-Free-and-Flex-Plans- And without custom domains you cannot send emails to anyone except the 5 manually whitelisted addresses, thus making this worthless. Also, gmail is not able to verify the DNS of the sandbox emails, and they go to spam.

  Mailgun does feel good otherwise if you are willing to pay. Their Heroku integration feels great, exposes everything you need on environment variables straight away.
* CloudMailin: does not feel as well developed as Mailgun. More focus on receiving. Tried adding TXT xxx._domainkey.ourbigbook.com and CNAME mta.ourbigbook.com entries with custom domain to see if it works, took forever to find that page... https://www.cloudmailin.com/outbound/domains/xxx Domain verification requires a bit of human contact via email.

  They also don't document their Heroku usage well. The envvars generated on Heroku are useless, only good for logging in on their web UI. The send username and password must be obtained on their confusing web UI.

= High performance computing
{parent=Computer form factor}
{wiki}

= Job scheduler
{parent=High performance computing}
{wiki}

= IBM Spectrum LSF
{c}
{parent=Job scheduler}
{title2=LSF}

= LSF get version
{c}
{parent=IBM Spectrum LSF}

Most/all commands have the `-V` option which prints the version, e.g.:
``
bsub -V
``

= LSF command
{c}
{parent=IBM Spectrum LSF}

= bsub
{c}
{parent=LSF command}

Submit a new job. The most important command!

Docs: https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=bsub-options

= bsub get job stdout and stderr
{parent=bsub}

By default, LSF only sends you an email with the stdout and stderr included in it, and does not show or store anything locally.

One option to store things locally is to use:
``
bsub -oo stdout.log -eo stderr.log 'echo myout; echo myerr 1>&2'
``
as documented at:
* https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=options-eo
* https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=options-oo
Or to use files with the job id in them:
``
bsub -oo %J.out -eo %J.err 'echo myout; echo myerr 1>&2'
``

By default `bsub -oo`:
* also contains the LSF metadata in addition to the actual submitted process stdout
* prevents the completion email from being sent
To get just the stdout to the file, use `bsub -N -oo` which:
* stores only stdout on the file
* re-enables the completion email
as mentioned at:
* https://www.ibm.com/support/pages/include-only-job-stdout-lsf-job-output-file
* https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=o-n

Another option is to run with the <bsub `-I` option>:
```
bsub -I 'echo a;sleep 1;echo b;sleep 1;echo c'
```
This immediately prints stdout and stderr to the terminal.

= bsub on foreground
{parent=bsub}

Run `bsub` on the foreground, showing stdout live on the host stdout, with the <bsub `-I` option>:
```
bsub -I 'echo a;sleep 1;echo b;sleep 1;echo c'; echo done
```
Ctrl + C kills the job on remote as well as locally.

Bibliography:
* https://superuser.com/questions/46312/wait-for-one-or-all-lsf-jobs-to-complete

= bsub option
{parent=bsub}

= bsub `-I` option
{parent=bsub option}

https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=options-i

= bpeek
{c}
{parent=LSF command}

View stdout/stderr of a running job.

Documented at:
* https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=reference-bpeek
* https://www.bsc.es/support/LSF/9.1.2/lsf_command_ref/index.htm?bpeek.1.html~main

= bkill
{c}
{parent=LSF command}

Kill jobs.

Documented at: https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=reference-bkill

= bkill all jobs
{c}
{parent=bkill}

By the current user:
``
bkill 0
``

= Slurm Workload Manager
{c}
{parent=High performance computing}
{wiki}

= SLURM
{c}
{synonym}
{title2}

= Supercomputer
{parent=High performance computing}
{wiki}

Some good insights on the earlier history of the industry at: <The Supermen: The Story of Seymour Cray by Charles J. Murray (1997)>.

= Exascale computing
{parent=Supercomputer}
{wiki}

The scale where human <brain simulation> becomes possible according to some estimates.

First publicly reached by <Frontier (supercomputer)>.

= TOP500
{c}
{parent=Supercomputer}
{wiki}

= Supercomputer by owner
{parent=Supercomputer}

= Oak Ridge supercomputer
{c}
{parent=Supercomputer by owner}
{tag=Oak Ridge National Laboratory}

= Frontier
{c}
{disambiguate=supercomputer}
{parent=Oak Ridge supercomputer}
{wiki}

= Intel supercomputer market share
{c}
{parent=Supercomputer}
{tag=Intel}

\Image[https://web.archive.org/web/20210908201649im_/https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2020/06/top500-june-2020-chip-technology.jpg]
{title=<Intel> <supercomputer> market share from 1993 to 2020}
{description=This graph is shocking, they just took over the entire market! Some good pre-Intel context at <The Supermen: The Story of Seymour Cray by Charles J. Murray (1997)>, e.g. in those earlier days, custom architectures like <Cray>'s and many others dominated.}
{source=https://www.nextplatform.com/2020/06/22/arm-and-japan-get-their-day-in-the-hpc-sun/}

= Personal computer
{parent=Computer form factor}
{wiki}

= Laptop
{parent=Personal computer}
{wiki}

= Desktop computer
{parent=Personal computer}
{wiki}

= Desktop
{synonym}

= Mobile phone
{parent=Personal computer}
{wiki}

= Cell phone
{synonym}

= Smartphone
{parent=Mobile phone}
{wiki}

= Mobile app
{parent=Mobile phone}
{wiki}

= App
{synonym}

= Workstation
{parent=Computer form factor}
{wiki}

= Computer data storage
{parent=Computer hardware}
{wiki}

= Storage
{synonym}

= Filesystem
{parent=Computer data storage}
{wiki}

= Computer file
{parent=Filesystem}
{wiki}

= File signature
{parent=Computer file}
{wiki}

= Tape drive
{parent=Computer data storage}
{title2=1950s-}
{wiki}

One of the most enduring forms of storage! Started in the 1950s, but still used in the 2020s as the cheapest (and slowest access) archival method. Robot arms are needed to load and read them nowadays.

\Video[https://www.youtube.com/watch?v=sYgnCWOVysY]
{title=Web camera mounted inside an IBM TS4500 tape library by lkaptoor (2020)}
{description=Footage dated 2018.}

= Volatile memory
{parent=Computer data storage}
{wiki}

= Random-access memory
{parent=Volatile memory}
{wiki}

= RAM
{c}
{synonym}
{title2}

In conventional speech of the early 2000's, it is basically a synonym for <dynamic random-access memory>.

= Static random-access memory
{parent=Random-access memory}
{wiki}

= SRAM
{c}
{synonym}
{title2}

= Dynamic random-access memory
{parent=Random-access memory}
{wiki}

= DRAM
{c}
{synonym}
{title2}

DRAM is often shortened to just <random-access memory>.

= Synchronous dynamic random-access memory
{parent=Dynamic random-access memory}
{title2=SDRAM}
{wiki}

= DDR SDRAM
{parent=Synchronous dynamic random-access memory}
{title2=DDR SDRAM}
{wiki}

= Magnetoresistive RAM
{parent=Random-access memory}
{tag=Non-volatile memory}
{title2=MRAM}
{wiki}

= Non-volatile memory
{parent=Computer data storage}

The opposite of <volatile memory>.

= Disk storage
{parent=Non-volatile memory}
{wiki}

= Disk read-and-write head
{parent=Disk storage}
{wiki}

= Magnetoresistive disk head
{parent=Disk read-and-write head}
{{wiki=Disk_read-and-write_head#Magnetoresistive_heads_(MR_heads)}}

= Solid-state storage
{parent=Non-volatile memory}
{wiki}

= SSD
{synonym}
{title2}

= Erase SSD securely
{parent=Solid-state storage}

You can't just <shred (UNIX)> individual <SSD> files because SSDs write only at large granularities, so hardware/drivers have to copy stuff around all the time to compact it. This means that leftover copies are left around everywhere.

What you can do however is to erase the entire thing with vendor support, which most hardware has support for. On hardware encrypted disks, you can even just erase the keys:
* ATA: https://www.thomas-krenn.com/en/wiki/Perform_a_SSD_Secure_Erase for ATA.
* NVMe: http://forum.notebookreview.com/threads/secure-erase-hdds-ssds-sata-nvme-using-hdparm-nvme-cli-on-linux.827525/
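For the ATA case, the procedure from the linked tutorial boils down to something like the following sketch; `/dev/sdX` is a placeholder, and the last command wipes the entire drive, so quadruple-check the device name:
``
# Check that security erase is supported and the drive is "not frozen".
sudo hdparm -I /dev/sdX | grep -A8 -i security
# Set a temporary user password, then issue the erase with that password.
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX
``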

TODO does shredding the

= Solid-state drive
{parent=Computer data storage}
{title2=SSD}
{wiki}

= Flash memory
{parent=Solid-state drive}
{wiki}

\Video[https://www.youtube.com/watch?v=5f2xOxRGKqk]
{title=The Engineering Puzzle of Storing Trillions of Bits in your <Smartphone> / SSD using Quantum Mechanics by Branch Education (2020)}
{description=Nice animations show how <quantum tunnelling> is used to set bits in <flash memory>.}

= I/O device
{parent=Computer hardware}
{wiki}

= Punched card
{parent=I O device}
{tag=Display device}
{wiki}

= Punch card
{synonym}

= Punchcard
{synonym}

Served as input, output and <storage> system in the early days!

\Video[https://www.youtube.com/watch?v=YnnGbcM-H8c]
{title=1964 IBM 029 Keypunch Card Punching Demonstration by CuriousMarc (2014)}

\Video[https://www.youtube.com/watch?v=L7jAOcc9kBU]
{title=Using Punch Cards by Bubbles Whiting (2016)}
{description=Interview at the <The Centre for Computing History>.}

\Video[https://www.youtube.com/watch?v=BlUWg2nxCz0]
{title=Once Upon A Punched Card by <IBM> (1964)}
{description=Goes on and on a bit too long. But cool still.}

= Hollerith tabulating machine
{c}
{parent=Punched card}

\Video[https://www.youtube.com/watch?v=YBnBAzrWeF0]
{title=The 1890 US Census and the history of punchcard computing by Stand-up Maths (2020)}
{description=It was basically a counting machine! Shows a reconstruction at the <Computer History Museum>.}

= Display device
{parent=I O device}
{wiki}

= Blinkenlights
{c}
{parent=Display device}
{wiki}

= E Ink
{c}
{parent=Display device}
{wiki}

Electronic Ink such as that found on Amazon Kindle is the greatest invention ever made by man.

Once E Ink reaches reasonable refresh rates to replace liquid crystal displays, the world will finally be saved.

It would allow <Ciro Santilli> to spend his entire life in front of a screen rather than in the real world without getting tired eyes, even if it is sunny outside.

Ciro stopped reading non-code non-news a while back though, so the current refresh rates are useless, what a shame.

OMG, this is amazing: https://getfreewrite.com/

= Amazon Kindle
{c}
{parent=E Ink}
{wiki}

<PDF> table of contents feature requests: https://twitter.com/cirosantilli/status/1459844683925008385

= Remarkable
{disambiguate=tablet}
{c}
{parent=E Ink}
{wiki}

<Remarkable 2> is really, really good. Relatively fast refresh + touchscreen is amazing.

No official public feedback forum unfortunately:
* https://twitter.com/cirosantilli/status/1459844683925008385
* https://www.reddit.com/r/RemarkableTablet/comments/7h341m/official_remarkable_feedback_ideas_and/
* https://www.reddit.com/r/RemarkableTablet/comments/7hxu70/link_for_remarkable_support_and_feature_requests/

<PDF> table of contents could be better: https://twitter.com/cirosantilli/status/1459844683925008385

= Remarkable 2
{c}
{parent=Remarkable (tablet)}

Display size: 10.3 inches. Perfect size.

= Computer input device
{parent=I O device}

= Teleprinter
{parent=Computer hardware}
{wiki}

= Teletype
{synonym}

Way, way before <instant messaging>, there was... teletype!

\Video[https://www.youtube.com/watch?v=2XLZ4Z8LpEE]
{title=Using a 1930 Teletype as a Linux Terminal by <CuriousMarc> (2020)}

= Instruction set architecture
{parent=Computer hardware}
{wiki}

= ISA
{c}
{synonym}
{title2}

The main interface between the <central processing unit> and <software>.

= Assembly language
{parent=Instruction set architecture}
{wiki}

= Assembly
{synonym}

A human readable way to write instructions for an <instruction set architecture>.

One of the topics covered in <Ciro Santilli>'s <Linux Kernel Module Cheat>.

= Assembler
{disambiguate=computing}
{parent=Assembly language}

= GNU Assembler
{c}
{parent=Assembler (computing)}
{tag=gcc}
{wiki}

= GNU GAS
{c}
{synonym}
{title2}

= Calling convention
{parent=Instruction set architecture}
{wiki}

= List of instruction set architectures
{parent=Instruction set architecture}

List of <instruction set architecture>[instruction set architectures].

= One instruction set computer
{parent=List of instruction set architectures}
{title2=OISC}
{wiki}

https://stackoverflow.com/questions/3711443/minimal-instruction-set-to-solve-any-problem-with-a-computer-program/38523869#38523869
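The classic concrete example is the `subleq` machine: a single "subtract and branch if less than or equal to zero" instruction is enough to be Turing complete. A minimal interpreter sketch (the memory layout and the halt-on-negative-jump convention here are just one common choice):

```python
def subleq(mem, pc=0):
    # The single instruction, 3 words a, b, c: mem[b] -= mem[a]; then jump
    # to c if the result is <= 0, else fall through to pc + 3.
    # Halt by jumping to a negative address.
    while 0 <= pc <= len(mem) - 3:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Copy mem[12] (source = 7) into mem[13] (destination = 0), using mem[14]
# as a temporary: T = 0; T -= S; D -= T, so D = S.
mem = [14, 14, 3,   # T -= T  (T = 0), jump to 3
       12, 14, 6,   # T -= S  (T = -7), jump to 6
       14, 13, 9,   # D -= T  (D = 7), result > 0 so fall through to 9
       14, 14, -1,  # T -= T  (T = 0), jump to -1: halt
       7, 0, 0]     # S, D, T
subleq(mem)
print(mem[13])  # → 7
```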

= ARM architecture family
{c}
{parent=List of instruction set architectures}
{tag=Arm (company)}

= ARM instruction set
{c}
{synonym}

= ARM
{c}
{disambiguate=ISA}
{synonym}

This <ISA> basically completely dominated the <smartphone> market of the 2010s and beyond, but it started appearing in other areas as the end of <Moore's law> made it more economical for large companies to start developing their own semiconductors, e.g. <Google custom silicon>, <Amazon custom silicon>.

It is exciting to see ARM entering the <server>, <desktop> and <supercomputer> market circa 2020, beyond its dominant mobile position and roots.

<Ciro Santilli> likes <Ciro Santilli's self perceived creative personality>[to see the underdogs rise], and bite off dominant ones.

Conversely however, the same excitement applies to <RISC-V>, which might take over the ARM mobile market one day.

Basically, as long as you were a huge company seeking to develop a <CPU> and were able to control your own ecosystem independently of <Windows>' desktop domination (held by the need for backward compatibility with a billion end user programs), ARM would be a possibility on your mind.

* in 2020, the Fugaku supercomputer, which uses an ARM-based <Fujitsu> designed chip, became the number 1 fastest supercomputer in <TOP500>: https://www.top500.org/lists/top500/2021/11/

  It was later beaten by another <x86> supercomputer https://www.top500.org/lists/top500/2022/06/[], but the message was clearly heard.
* 2012 https://hackaday.com/2012/07/09/pedal-powered-32-core-arm-linux-server/ pedal-powered 32-core Arm Linux server. A <publicity stunt>, but still, cool.
* <AWS Graviton>

= PowerPC
{c}
{parent=List of instruction set architectures}
{wiki}

= RISC-V
{c}
{parent=List of instruction set architectures}
{wiki}

The leading no-royalties options as of 2020.

<China> has been a major <RISC-V> potential user in the late 2010s, since the country is trying to increase its <semiconductor industry> independence, especially given economic sanctions imposed by the <USA>.

E.g. as a result of this, the <RISC-V Foundation> moved its legal headquarters to <Switzerland> in 2019 to try and overcome some of the sanctions.

= RISC-V International
{c}
{parent=RISC-V}

= RISC-V Foundation
{c}
{synonym}
{title2}

= SiFive
{c}
{parent=RISC-V}
{wiki}

Leading <RISC-V> consultants as of 2020, they are basically trying to become the <Red Hat> of the <semiconductor industry>.

= RISC-V timer
{parent=RISC-V}
{tag=QEMU}

= riscv/timer.S
{parent=RISC-V timer}
{file}

TODO: the interrupt is firing only once:
* https://www.reddit.com/r/RISCV/comments/ov4vhh/timer_interrupt/

Adapted from: https://danielmangum.com/posts/risc-v-bytes-timer-interrupts/

Tested on <Ubuntu 23.10>:
``
sudo apt install binutils-riscv64-unknown-elf qemu-system-misc gdb-multiarch
cd riscv
make
``
Then on shell 1:
``
qemu-system-riscv64 -machine virt -cpu rv64 -smp 1 -s -S -nographic -bios none -kernel timer.elf
``
and on shell 2:
``
gdb-multiarch timer.elf -nh -ex "target remote :1234" -ex 'display /i $pc' -ex 'break *mtrap' -ex 'display *0x2004000' -ex 'display *0x200BFF8'
``
<GDB> should break infinitely many times on `mtrap` as interrupts happen.

= RISC-V privileged ISA
{parent=RISC-V}

= RISC-V MSTATUS register
{parent=RISC-V privileged ISA}

= RISC-V MSTATUS.MIE field
{parent=RISC-V MSTATUS register}

= x86
{c}
{parent=List of instruction set architectures}
{wiki}

\Include[x86-paging]

= x86 custom instructions
{c}
{parent=x86}

<Intel> is known to have created customized chips for very large clients.

This is mentioned e.g. at: https://www.theregister.com/2021/03/23/google_to_build_server_socs/
\Q[Intel is known to do custom-ish cuts of Xeons for big customers.]
Those chips are then used only in large scale server deployments of those very large clients. <Google> is most likely one of them, given their penchant for <Google custom hardware>.

TODO better sources.

= Y86
{c}
{parent=List of instruction set architectures}

https://esolangs.org/wiki/Y86 mentions:
\Q[Y86 is a toy RISC CPU instruction set for education purpose.]

One specification at: http://web.cse.ohio-state.edu/~reeves.92/CSE2421sp13/PracticeProblemsY86.pdf

= Computer manufacturer
{parent=Computer hardware}

This section is about companies that integrate parts and software from various other companies to make up fully working computer systems.

= Dell
{c}
{parent=Computer manufacturer}
{wiki}

= Lenovo
{c}
{parent=Computer manufacturer}
{wiki}

Their website is a bit <shitty>, clearly a non-cohesive amalgamation of several different groups.

E.g. you have to create several separate accounts, and different regions have completely different accounts and websites.

The <Europe> replacement part website for example is clearly made by a third party called https://flex.com/ and has Flex written all over it, and the header of the home page has very obviously broken CSS. And you can't create an account without a VAT number... and they confirmed by email that they don't sell to non-corporate entities without a VAT number. What <bullshit>!

= ThinkPad
{c}
{parent=Lenovo}
{wiki}

This is <Ciro Santilli>'s favorite laptop brand. He's been on it since the early 2010's after he saw his <Ciro Santilli's wife>[then-girlfriend-later-wife] using it.

Ciro doesn't know how to explain it, but ThinkPads just feel... right. The screen, the keyboard, the lid, the touchpad are all exactly what Ciro likes.

The only problem with ThinkPad is that it is owned by <Lenovo> which is a <Ciro Santilli's campaign for freedom of speech in China>[Chinese company], and that makes Ciro feel bad. But he likes it too much to quit... what to do?

Ciro is also reassured to see that in every enterprise he's been so far as of 2020, ThinkPads are very dominant. And the same when you see internal videos from other big tech enterprises, all those nerds are running... Ubuntu on ThinkPads! And the https://en.wikipedia.org/wiki/File:ISS-38_EVA-1_Laptops.jpg[ISS].

Those nerds like their ThinkPads so much, that Ciro has seen some acquaintances with crazy old ThinkPad machines, missing keyboard buttons or the like. They just like their machines that much.

ThinkPads are also designed for repairability, and it is easy to buy replacement parts, and there are OEM part replacement video tutorials: https://www.youtube.com/watch?v=vseFzFFz8lY No visible <planned obsolescence> here! With the caveat that the official online part stores can be <shit> as mentioned at <Lenovo>{full}.

Furthermore, in 2020 Lenovo announced full certification for <Ubuntu> https://www.forbes.com/sites/jasonevangelho/2020/06/03/lenovos-massive-ubuntu-and-red-hat-announcement-levels-up-linux-in-2020/#28a8fd397ae0 which is \i[fantastic] news!

The only thing Ciro never understood is the trackpoint: https://superuser.com/questions/225059/how-to-get-used-of-trackpoint-on-a-thinkpad Why would you use that with such an amazing touchpad? And <vimium>.

= ThinkPad series
{parent=ThinkPad}

https://www.reddit.com/r/thinkpad/comments/crw08i/series_differences_t_vs_x_vs_p_vs_e_vs_etc/

= Raspberry Pi Foundation
{c}
{parent=Computer manufacturer}
{wiki}

= Raspberry Pi Foundation project
{c}
{parent=Raspberry Pi Foundation}
{wiki}

= Raspberry Pi OS
{c}
{parent=Raspberry Pi Foundation project}
{wiki}

Change password without access:
* https://raspberrypi.stackexchange.com/questions/24770/change-reset-password-without-monitor-keyboard

Enable SSH on boot:
* `sudo touch /boot/ssh`

= Raspberry Pi
{c}
{parent=Raspberry Pi Foundation project}
{tag=Devboard}
{title2=2012}
{wiki}

= Raspberry Pi 1
{c}
{parent=Raspberry Pi}

= Raspberry Pi 2
{c}
{parent=Raspberry Pi}

Model B V 1.1.

SoC: BCM2836

https://www.raspberrypi.org/products/raspberry-pi-2-model-b/

= Raspberry Pi 3
{c}
{parent=Raspberry Pi}

Model B V 1.2.

SoC: BCM2837

Serial from `cat /proc/cpuinfo`: 00000000c77ddb77

= Raspberry Pi Pico
{c}
{parent=Raspberry Pi}
{tag=Microcontroller}
{title2=2021}

Some key specs:
* <SoC>:
  * name: RP2040. Custom designed by <Raspberry Pi Foundation>, likely the first they make themselves rather than using a <Broadcom> chip. But the design still is closed source, likely wouldn't be easy to open source due to the usage of closed proprietary IP like the <ARM ISA>
  * dual core <ARM Cortex-M0+>
  * frequency: 2 kHz to 133 MHz, 125 MHz by default
  * memory: 264KB on-chip <SRAM>
* GPIO voltage: 3.3V

Datasheet: https://datasheets.raspberrypi.com/pico/pico-datasheet.pdf

\Image[https://web.archive.org/web/20220808214856im_/https://twilio-cms-prod.s3.amazonaws.com/images/6ofE97USO9rBn4LidgxTgfrAqK0UiI3v524IPNHc7ac3SA.width-800.png]
{source=https://datasheets.raspberrypi.com/pico/Pico-R3-A4-Pinout.pdf}

= Raspberry Pi Pico variant
{c}
{parent=Raspberry Pi Pico}
{title2=2022}

= Raspberry Pi Pico H
{c}
{parent=Raspberry Pi Pico variant}

Has a <serial wire debug> connector. Why would you ever get one without it, unless you are a clueless newbie like <Ciro Santilli>?!?!

= Raspberry Pi Pico W
{c}
{parent=Raspberry Pi Pico variant}
{title2=2022}

Datasheet: https://datasheets.raspberrypi.com/picow/pico-w-datasheet.pdf

= Raspberry Pi Pico W UART
{parent=Raspberry Pi Pico W}
{tag=UART}

You can connect from an <Ubuntu 22.04> host as:
``
screen /dev/ttyACM0 115200
``
When in `screen`, you can press Ctrl + C to kill `main.py`; execution stops and you are left in a Python shell. From there:
* Ctrl + D: reboots
* Ctrl + A K: kills the <GNU screen> window. Execution continues normally
but be aware of: <Raspberry Pi Pico W freezes a few seconds after after screen disconnects from UART>.

Other options:
* <ampy> `run` command, which solves <How to run a MicroPython script from a file on the Raspberry Pi Pico W from the command line?>

= Program Raspberry Pi Pico W with MicroPython
{parent=Raspberry Pi Pico W}
{tag=MicroPython}

= How to run a MicroPython script from a file on the Raspberry Pi Pico W from the command line?
{parent=Program Raspberry Pi Pico W with MicroPython}

The first/only way Ciro could find was with <ampy>: https://stackoverflow.com/questions/74150782/how-to-run-a-micropython-host-script-file-on-the-raspbery-pi-pico-from-the-host/74150783#74150783 That just worked, perfectly!
``
python3 -m pip install --user adafruit-ampy
ampy --port /dev/ttyACM0 run blink.py
``

TODO: possible with <rshell>?

= MicroPython connection tool
{c}
{parent=Program Raspberry Pi Pico W with MicroPython}

= ampy
{parent=MicroPython connection tool}

Source: https://github.com/scientifichackers/ampy

Install on <Ubuntu 22.04>:
``
python3 -m pip install --user adafruit-ampy
``

Bibliography:
* https://www.digikey.co.uk/en/maker/projects/micropython-basics-load-files-run-code/fb1fcedaf11e4547943abfdd8ad825ce

= rshell
{parent=MicroPython connection tool}

https://github.com/dhylands/rshell

= How to exit from repl in rshell?
{parent=rshell}

Ctrl + X. Documented by running `help repl` from the main shell.

= Raspberry Pi Pico W freezes a few seconds after after screen disconnects from UART
{c}
{parent=Program Raspberry Pi Pico W with MicroPython}

* https://stackoverflow.com/questions/74081960/raspberry-pico-w-micropython-execution-freezes-a-few-seconds-after-disconnecting
* https://github.com/orgs/micropython/discussions/9633

= Program Raspberry Pi Pico W with MicroPython code from the command line
{parent=Program Raspberry Pi Pico W with MicroPython}

https://stackoverflow.com/questions/66183596/how-can-you-make-a-micropython-program-on-a-raspberry-pi-pico-autorun/74078142#74078142

Examples at: <Raspberry Pi Pico W MicroPython example>.

= Program the Raspberry Pi Pico W with MicroPython from Thonny
{parent=Program Raspberry Pi Pico W with MicroPython}

https://stackoverflow.com/questions/66183596/how-can-you-make-a-micropython-program-on-a-raspberry-pi-pico-autorun/74078142#74078142

Examples at: <Raspberry Pi Pico W MicroPython example>.

= Raspberry Pi Pico W MicroPython example
{c}
{parent=Program Raspberry Pi Pico W with MicroPython}

An upstream repo at: https://github.com/raspberrypi/pico-micropython-examples

Our examples at: \a[rpi-pico-w/upython].

The examples can be run as described at <Program Raspberry Pi Pico W with MicroPython>.
* \a[rpi-pico-w/upython/blink.py]: blink on-board <LED>. Note that they broke the LED hello world compatibility from non-W to W for God's sake!!!
* \a[rpi-pico-w/upython/led_on.py]: turn on-board LED on and leave it on forever
* \a[rpi-pico-w/upython/uart.py]: has automatic <UART> via USB. Any `print()` command ends up on the <Raspberry Pi Pico W UART>! It is just like with <Micro Bit>, must be a standard MicroPython thing. The onboard LED is blinked as a <heartbeat (computing)>.
* \a[rpi-pico-w/upython/blink_gpio.py]: toggle GPIO pin 0 on and off twice a second. Also toggle the on-board LED and print to UART for correlation. You can see this in action e.g. by linking an LED between pin 0 and one of the GND pins of the Pi, and the LED will blink.
* \a[rpi-pico-w/upython/pwm.py]: <pulse width modulation>. Using the same circuit as the \a[rpi-pico-w/upython/blink_gpio.py] example, you will now see the external LED go from dark to bright continuously and then back.
* \a[rpi-pico-w/upython/adc.py]: <analog-to-digital converter>. The program prints to the <UART> the value of the ADC on GPIO 26 once every 0.2 seconds. The onboard LED is blinked as a <heartbeat (computing)>. The hello world is with a <potentiometer>: extremes on GND and VCC pins of the Pi, and middle output on pin 26, then as you turn the knob, the uart value goes from about 0 to about 64k.
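
The "0 to about 64k" range in `adc.py` comes from `ADC.read_u16()` scaling readings to the full 16-bit range regardless of the hardware resolution; converting to volts is a linear map. A host-runnable sketch of just that arithmetic (on the Pico you would feed it `machine.ADC(26).read_u16()`; the 3.3 V reference is the board default):
``
def adc_u16_to_volts(raw, vref=3.3):
    # read_u16() returns 0..65535 spanning 0 V..vref.
    return raw * vref / 65535

print(adc_u16_to_volts(0))      # 0.0
print(adc_u16_to_volts(65535))  # 3.3
``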

= Program Raspberry Pi Pico W with C
{parent=Raspberry Pi Pico W}

* https://www.raspberrypi.com/documentation/microcontrollers/c_sdk.html
* https://github.com/raspberrypi/pico-sdk
* https://github.com/raspberrypi/pico-examples The key hello world examples are:
  * https://github.com/raspberrypi/pico-examples/tree/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/hello_world/usb
  * https://github.com/raspberrypi/pico-examples/tree/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/blink

<Ubuntu 22.04> build just worked, nice! Feels much cleaner than the <Micro Bit> C setup:
``
sudo apt install cmake gcc-arm-none-eabi libnewlib-arm-none-eabi libstdc++-arm-none-eabi-newlib

git clone https://github.com/raspberrypi/pico-sdk
cd pico-sdk
git checkout 2e6142b15b8a75c1227dd3edbe839193b2bf9041
cd ..

git clone https://github.com/raspberrypi/pico-examples
cd pico-examples
git checkout a7ad17156bf60842ee55c8f86cd39e9cd7427c1d
cd ..

export PICO_SDK_PATH="$(pwd)/pico-sdk"
cd pico-examples
mkdir build
cd build
# Board selection.
# https://www.raspberrypi.com/documentation/microcontrollers/c_sdk.html also says you can give wifi ID and password here for W.
cmake -DPICO_BOARD=pico_w ..
make -j
``

Then we install the programs just like any other <UF2>: plug it in with BOOTSEL pressed and copy the UF2 over, e.g.:
``
cp pico_w/blink/picow_blink.uf2 /media/$USER/RPI-RP2/
``
Note that there are separate examples for the W and non-W LEDs; for non-W it is:
``
cp blink/blink.uf2 /media/$USER/RPI-RP2/
``

Also tested the UART over USB example:
``
cp hello_world/usb/hello_usb.uf2 /media/$USER/RPI-RP2/
``
You can then see the UART messages with:
``
screen /dev/ttyACM0 115200
``

TODO: understand the proper debug setup, and a flash setup that doesn't require us to unplug and replug the thing every two seconds. https://www.electronicshub.org/programming-raspberry-pi-pico-with-swd/ appears to describe it, using SWD for both debug and flash. To do it, you seem to need another board with <GPIO>, e.g. a <Raspberry Pi>; the laptop alone is not enough.

= Peripheral
{parent=Computer hardware}
{wiki}

= Computer mouse
{parent=Peripheral}
{tag=I O device}
{wiki}

= Computer keyboard
{parent=Peripheral}
{tag=I O device}
{wiki}

= Keyboard layout
{parent=Computer keyboard}
{wiki}

= QWERTY
{c}
{parent=Keyboard layout}
{wiki}

= Dvorak keyboard layout
{c}
{parent=Keyboard layout}
{tag=Good}
{tag=Idealism}
{wiki}

Dvorak users will automatically go to <Heaven>.

= Computer keyboard model
{parent=Computer keyboard}

= Kinesis Advantage 2 keyboard
{c}
{parent=Computer keyboard model}

https://kinesis-ergo.com/shop/advantage2/

For <Ciro Santilli>, this is not a <computer keyboard>. It is a <fetish>.

= Webcam
{parent=Peripheral}
{wiki}

= Peripheral interface
{parent=Peripheral}

= PCI
{c}
{parent=Peripheral interface}
{wiki}

\Video[https://www.youtube.com/watch?v=PrXwe21biJo]
{title=PCIe computer explained by <ExplainingComputers> (2018)}

= PCIe
{c}
{parent=PCI}
{wiki}

= lspci
{c}
{parent=PCI}

`lspci` is the name of several versions of <CLI tools> used in <UNIX>-like systems to query information about <PCI> devices in the system.

On <Ubuntu 23.10>, it is provided by the <pciutils> package, which is so dominant that when we say "lspci" without qualification, that's what we mean.

= pciutils
{c}
{parent=lspci}

Software project that provides <lspci>.

= Get vendor and device ID for each PCI device
{parent=lspci}

https://stackoverflow.com/questions/59010671/how-to-get-vendor-id-and-device-id-of-all-pci-devices
``
grep PCI_ID /sys/bus/pci/devices/*/uevent
``

<lspci> is missing such basic functionality!
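
The `uevent` files are plain `KEY=value` lines, so the extraction is easy to script. A sketch of the parsing, run here on a literal sample so it works on any machine; the real files live under `/sys/bus/pci/devices/*/uevent`:
``
def parse_pci_id(uevent_text):
    # PCI_ID is "vendor:device", both 4-digit hex strings.
    for line in uevent_text.splitlines():
        if line.startswith('PCI_ID='):
            vendor, device = line.split('=', 1)[1].split(':')
            return vendor, device
    return None

sample = 'DRIVER=nvme\nPCI_ID=8086:F1A8\nPCI_SLOT_NAME=0000:3b:00.0\n'
print(parse_pci_id(sample))  # ('8086', 'F1A8')
``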

= USB
{c}
{parent=Peripheral interface}
{wiki}

= USB Micro-B
{c}
{parent=USB}

= USB-C
{c}
{parent=USB}
{tag=Good}
{wiki}

= Semiconductor industry
{parent=Computer hardware}
{wiki}

= Film about the semiconductor industry
{parent=Semiconductor industry}
{tag=Business film}

= Halt and Catch Fire
{disambiguate=TV series}
{parent=Film about the semiconductor industry}
{tag=Business film}
{title2=2014-2017}
{wiki}

Season 1 was amazing. The others fell off a bit.

= Semiconductor company
{parent=Semiconductor industry}
{tag=Company}

This section is about companies that design <semiconductors>.

For companies that manufacture semiconductors, see also: <company with a semiconductor fabrication plant>.

= Acorn Computers
{c}
{parent=Semiconductor company}
{wiki}

= AMD
{c}
{parent=Semiconductor company}
{tag=American company}
{title2=1969}
{wiki}

\Video[https://www.youtube.com/watch?v=Rtb4mjIACTY]
{title=How <AMD> went from nearly Bankrupt to Booming by Brandon Yen (2021)}
{description=
* https://youtu.be/Rtb4mjIACTY?t=118 the Bulldozer series CPUs were a disaster
* https://youtu.be/Rtb4mjIACTY?t=324 got sued for marketing claims on number of cores vs number of <hyperthreads>
* https://youtu.be/Rtb4mjIACTY?t=556 Ryzen first gen was rushed and a bit buggy, but it had potential. Gen 2 fixed those.
* https://youtu.be/Rtb4mjIACTY?t=757 Ryzen Gen 3 surpassed Intel's single thread performance. Previously Gen 2 had won on multicore.
}

= AMD product
{c}
{parent=AMD}

= AMD CPU
{c}
{parent=AMD product}
{wiki}

They have been masters of second sourcing things for a long time! One can only imagine the complexity of the <Intel> cross licensing deals.

= Ryzen
{c}
{parent=AMD CPU}
{wiki}

This was the CPU architecture that saved AMD in the 2010s, see also: <video How AMD went from nearly Bankrupt to Booming by Brandon Yen (2021)>.

= Epyc
{c}
{parent=AMD CPU}
{wiki}

= AMD GPU
{c}
{parent=AMD product}
{wiki}

= AMD GPU driver
{c}
{parent=AMD GPU}

= AMDGPU
{c}
{parent=AMD GPU driver}
{wiki=AMDgpu_(Linux_kernel_module)}

Bibliography:
* https://wiki.archlinux.org/title/AMDGPU
* https://gitlab.freedesktop.org/drm/amd an issue tracker
* https://github.com/ROCm/ROCK-Kernel-Driver TODO vs the GitLab?

= RDNA
{c}
{parent=AMD GPU}
{wiki=RDNA_(microarchitecture)}

= RDNA 3
{c}
{parent=RDNA}
{title2=2022}

= gfx1103
{parent=RDNA 3}

Mentioned e.g. at: https://videocardz.com/newz/amd-begins-rdna3-gfx11-graphics-architecture-enablement-for-llvm-project as being part of <RDNA 3>.

= Radeon
{c}
{parent=AMD GPU}
{wiki}

= AMD Instinct
{c}
{parent=AMD GPU}
{wiki}

= ATI Technologies
{c}
{parent=AMD GPU}
{tag=Canadian company}
{title2=1985-2006}
{wiki}

= AMD employee
{c}
{parent=AMD}
{wiki}

= Jerry Sanders
{c}
{parent=AMD employee}
{title2=AMD co-founder and CEO until 2002}
{wiki=Jerry_Sanders_(businessman)}

\Video[https://www.youtube.com/watch?v=HqWWoaA8pIs]
{title=AMD Founder Jerry Sanders Interview (2002)}
{description=
Source: https://exhibits.stanford.edu/silicongenesis/catalog/hr396zc0393[]. Fun to watch.
* https://youtu.be/HqWWoaA8pIs?t=779 https://en.wikipedia.org/wiki/Newton_N._Minow[Newton Minow] mandated <UHF> on all television sets in 1961, and the <oscillator> needed for the tuner was one of the first major non-military products from <Fairchild>, the 28918 (?).
* https://youtu.be/HqWWoaA8pIs?t=1053 Fairchild had won the first round of a <Minuteman> contract, but lost the second one due to poor management
}

= Lisa Su
{c}
{parent=AMD employee}
{wiki}

= Arm
{disambiguate=company}
{c}
{parent=Semiconductor company}
{wiki}

= Arm Ltd.
{c}
{synonym}

\Video[https://www.youtube.com/watch?v=FCmnWTlDK6M]
{title=Arm 30 Years On: Episode One by <Arm Ltd.> (2022)}

\Video[https://www.youtube.com/watch?v=w_CiSKUFvcg]
{title=Arm 30 Years On: Episode Two by <Arm Ltd.> (2022)}

\Video[https://www.youtube.com/watch?v=QmHpoi4BVwM]
{title=Arm 30 Years On: Episode Three by <Arm Ltd.> (2022)}
{description=This one is boring US expansion. Other two are worth it.}

= Allen Wu
{c}
{parent=Arm (company)}

https://www.linkedin.com/in/allenxwu

This situation is the most bizarre thing ever. The dude was fired in 2020, but he refused to go, and because he holds the company seal, they can't force him out. He is still going to the office as of 2022. It makes one wonder what the true political causes of this situation are. A big warning sign to all companies trying to set up joint ventures in <China>!

* 2022 https://www.reuters.com/technology/arm-china-says-its-ousted-ceo-wu-is-refusing-pack-up-2022-05-05/

\Video[https://www.youtube.com/watch?v=uLzjZoS-jCs]
{title=ARM Fired ARM China’s CEO But He Won’t Go by <Asianometry> (2021)}

= Arm product
{c}
{parent=Arm (company)}

= Arm Artisan
{c}
{parent=Arm product}
{wiki}

= ARM CPU
{c}
{parent=Arm product}
{wiki}

= ARM Cortex-M
{c}
{parent=Arm CPU}
{wiki}

= ARM Cortex-M0+
{c}
{parent=ARM Cortex-M}
{wiki}

= Broadcom
{c}
{parent=Semiconductor company}
{tag=HP spinoff}
{wiki}

= Cerebras
{c}
{parent=Semiconductor company}
{tag=Fabless semiconductor company}
{title2=2015-}
{wiki}

For some reason they attempt to make a single chip on an entire <wafer>!

They didn't care about <MLPerf> as of 2019: https://www.zdnet.com/article/cerebras-did-not-spend-one-minute-working-on-mlperf-says-ceo/

* 2023: https://www.eetimes.com/cerebras-sells-100-million-ai-supercomputer-plans-8-more/ Cerebras Sells \$100 Million AI Supercomputer, Plans Eight More

\Image[https://web.archive.org/web/20230613000748if_/https://www.cerebras.net/wp-content/uploads/2022/03/Chip-comparison-01-uai-1032x1032.jpg]
{source=https://www.cerebras.net/product-chip/}

= Graphcore
{c}
{parent=Semiconductor company}
{wiki}

= Intel
{c}
{parent=Semiconductor company}
{tag=Company with a semiconductor fabrication plant}
{title2=1968-}
{wiki}

= Intel GPU
{c}
{parent=Intel}

= Intel discrete GPU
{c}
{parent=Intel GPU}

= Intel Xe
{c}
{parent=Intel discrete GPU}
{wiki}

= Intel Arc
{c}
{parent=Intel discrete GPU}
{wiki}

\Video[https://www.youtube.com/watch?v=MjYSeT-T5uk]
{title=Worst We've Tested: Broken Intel Arc GPU Drivers by Gamers Nexus (2022)}

= Intel Graphics Technology
{c}
{parent=Intel GPU}
{title2=Intel integrated GPUs}
{wiki}

= Intel Research
{c}
{parent=Intel}
{wiki=Intel_Research_Lablets}

= Intel Research Lablets
{c}
{synonym}
{title2}

"Intel Research Lablets", that's a terrible name.

= Nvidia
{c}
{parent=Semiconductor company}
{wiki}

Open source <driver (software)>/hardware interface specification??? E.g. on <Ubuntu>, a large part of the nastiest UI breaking bugs <Ciro Santilli> encountered over the years have been GPU related. Do you think that is a coincidence??? E.g. <ubuntu 21.10 does not wake up from suspend>.

\Video[https://www.youtube.com/watch?v=_36yNWw_07g]
{title=<Linus Torvalds> saying "Nvidia Fuck You" (2012)}

\Video[https://www.youtube.com/watch?v=TRZqE6H-dww]
{title=How Nvidia Won Graphics Cards by <Asianometry> (2021)}
{description=
* <Doom (video game)> was the first <killer app> of <personal computer> 3D graphics! As opposed to professional rendering e.g. for <CAD> as was supported by <Silicon Graphics>
* https://youtu.be/TRZqE6H-dww?t=694 they bet on <Direct3D>
* https://youtu.be/TRZqE6H-dww?t=749 they wrote their own drivers. At the time, most <driver (software)>[drivers] were written by the <computer manufacturers>. That's insane!
}

\Video[https://www.youtube.com/watch?v=GuV-HyslPxk&list=WL]
{title=How Nvidia Won AI by <Asianometry> (2022)}

= Software developed by Nvidia
{c}
{parent=Nvidia}
{tag=Command line utility}

= nvidia-smi
{c}
{parent=Software developed by Nvidia}
{tag=Command line utility}

= Nvidia GPU
{c}
{parent=Nvidia}

= Nvidia Tesla
{c}
{parent=Nvidia GPU}
{tag=GPGPU}
{wiki}

= Nvidia T4
{c}
{parent=Nvidia Tesla}

= Nvidia A10G
{c}
{parent=Nvidia Tesla}

= Qualcomm
{c}
{parent=Semiconductor company}
{wiki}

<Ciro Santilli> has always had a good impression of these people.

= Silicon Graphics
{c}
{parent=Semiconductor company}
{title2=1981-2009}
{wiki}

This company is a bit like <Sun Microsystems>, you can hear a note of awe in the voice of those who knew it at its peak. This was a bit before <Ciro Santilli>'s awakening.

Those people created <OpenGL> for <God>'s sake! Venerable.

Both SGI and Sun kind of died in the same way: unable to move from the <workstation> to the <personal computer> fast enough, they just got killed by the scale of competitors who did, notably <NVIDIA> for graphics cards.

Some/all <Nintendo 64 games> were developed on it, e.g. it is well known that this was the case for <Super Mario 64>.

Also they were a big <UNIX> vendor, which is another kudos to the company.

\Video[https://www.youtube.com/watch?v=Oy-kE0dq1cE]
{title=<Silicon Graphics> Promo (1987)}
{description=Highlights that this was one of the first widely available options for professional engineers/designers to do real-time 3D rendering for their designs. Presumably before it, you had to use scripting to CPU render and do any changes incrementally by modifying the script.}

= Chinese semiconductor industry
{c}
{parent=Semiconductor industry}
{tag=China}

\Video[https://www.youtube.com/watch?v=zd6iZFPiCFQ]
{title=<China>'s Making x86 Processors by <Asianometry> (2021)}