Multiply–accumulate operation

In computing, especially digital signal processing, the multiply–accumulate operation is a common step that computes the product of two numbers and adds that product to an accumulator. The hardware unit that performs the operation is known as a multiplier–accumulator (MAC, or MAC unit); the operation itself is also often called a MAC or a MAC operation. The MAC operation modifies an accumulator a:

a ← a + (b × c)
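
As a minimal illustration in C, this update is the inner step of an ordinary dot-product loop (the function name dot and the sample vectors are ours, for illustration only):

    #include <stdio.h>

    /* Dot product of two vectors: each iteration performs one
     * multiply-accumulate step, a <- a + (b[i] * c[i]). */
    double dot(const double *b, const double *c, int n) {
        double a = 0.0;                 /* the accumulator */
        for (int i = 0; i < n; i++)
            a += b[i] * c[i];           /* one MAC operation */
        return a;
    }

    int main(void) {
        double b[] = {1.0, 2.0, 3.0};
        double c[] = {4.0, 5.0, 6.0};
        printf("%g\n", dot(b, c, 3));   /* 1*4 + 2*5 + 3*6 = 32 */
        return 0;
    }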

When done with floating-point numbers, it might be performed with two roundings (typical in many DSPs) or with a single rounding. When performed with a single rounding, it is called a fused multiply–add (FMA) or fused multiply–accumulate (FMAC).

Modern computers may contain a dedicated MAC unit, consisting of a multiplier implemented in combinational logic followed by an adder and an accumulator register that stores the result. The output of the register is fed back to one input of the adder, so that on each clock cycle the output of the multiplier is added to the register. Combinational multipliers require a large amount of logic, but can compute a product much more quickly than the method of shifting and adding typical of earlier computers. Percy Ludgate was the first to conceive of a MAC in his Analytical Machine of 1909,[1] and the first to exploit a MAC for division (using multiplication seeded by a reciprocal, via the convergent series (1 + x)⁻¹). The first modern processors to be equipped with MAC units were digital signal processors, but the technique is now also common in general-purpose processors.

In floating-point arithmetic

When done with integers, the operation is typically exact (computed modulo some power of two). However, floating-point numbers have only a certain amount of mathematical precision. That is, digital floating-point arithmetic is generally not associative or distributive. (See Floating point § Accuracy problems.) Therefore, it makes a difference to the result whether the multiply–add is performed with two roundings, or in one operation with a single rounding (a fused multiply–add). IEEE 754-2008 specifies that it must be performed with one rounding, yielding a more accurate result.[2]
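
The non-associativity can be seen in a few lines of C; the decimal constants are arbitrary examples:

    #include <stdio.h>

    int main(void) {
        /* The two groupings round differently, so the results differ. */
        printf("%.17g\n", (0.1 + 0.2) + 0.3);  /* 0.60000000000000009 */
        printf("%.17g\n", 0.1 + (0.2 + 0.3));  /* 0.59999999999999998 */
        return 0;
    }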

Fused multiply–add

A fused multiply–add (FMA or fmadd)[3] is a floating-point multiply–add operation performed in one step, with a single rounding. That is, where an unfused multiply–add would compute the product b×c, round it to N significant bits, add the result to a, and round back to N significant bits, a fused multiply–add would compute the entire expression a + b×c to its full precision before rounding the final result down to N significant bits.
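
The effect of the single rounding can be observed directly with C99's fma(); the constant x below is an arbitrary choice that makes the discarded bits easy to see:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double x = 1.0 + 1.0 / (1 << 30);  /* x = 1 + 2^-30 exactly */
        /* Unfused: x*x is rounded to 53 significant bits, discarding
         * the 2^-60 term of the exact square 1 + 2^-29 + 2^-60. */
        double unfused = x * x;
        /* Fused: x*x - unfused is formed at full precision before the
         * single final rounding, so it recovers the discarded term. */
        double r = fma(x, x, -unfused);
        printf("%g\n", r);                 /* 8.67362e-19, i.e. 2^-60 */
        return 0;
    }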

A fast FMA can speed up and improve the accuracy of many computations that involve the accumulation of products:

  • Polynomial evaluation (e.g., with Horner's rule; see the sketch after this list)
  • Newton's method for evaluating functions (from the inverse function)
  • Convolutions and artificial neural networks
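
A Horner evaluation costs one FMA per coefficient; a minimal C sketch (the helper name horner and the coefficients are ours, for illustration):

    #include <math.h>
    #include <stdio.h>

    /* Evaluate c[0] + c[1]*x + ... + c[n-1]*x^(n-1) by Horner's rule;
     * each step is a single fused multiply-add. */
    double horner(const double *c, int n, double x) {
        double p = c[n - 1];
        for (int i = n - 2; i >= 0; i--)
            p = fma(p, x, c[i]);           /* p <- p*x + c[i], one rounding */
        return p;
    }

    int main(void) {
        double c[] = {1.0, -3.0, 2.0};     /* 1 - 3x + 2x^2 */
        printf("%g\n", horner(c, 3, 2.0)); /* 1 - 6 + 8 = 3 */
        return 0;
    }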

Fused multiply–add can usually be relied on to give more accurate results. However, William Kahan has pointed out that it can give problems if used unthinkingly.[4] If x² − y² is evaluated as ((x×x) − y×y) using fused multiply–add, then the result may be negative even when x = y, because the first multiplication discards low significance bits. This could then lead to an error if, for instance, the square root of the result is then evaluated.
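
Kahan's pitfall is easy to reproduce in C; x = sqrt(2.0) is one concrete value for which the fused evaluation with x = y comes out negative:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = sqrt(2.0), y = x;
        /* y*y is rounded (upward, for this value), but x*x enters the
         * FMA at full precision, so the difference of two equal
         * quantities comes out slightly negative instead of zero. */
        double d = fma(x, x, -(y * y));
        printf("%g\n", d);                /* about -1.7e-16, not 0 */
        printf("%g\n", sqrt(d));          /* nan: square root of a negative */
        return 0;
    }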

When implemented inside a microprocessor, an FMA can be faster than a multiply operation followed by an add. However, standard industrial implementations based on the original IBM RS/6000 design require a 2N-bit adder to compute the sum properly.[5]

Another benefit of including this instruction is that it allows an efficient software implementation of division (see division algorithm) and square root (see methods of computing square roots), eliminating the need for dedicated hardware for those operations.[6]
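
A sketch of the idea for division in C, using Newton–Raphson refinement of a reciprocal; the float-precision initial guess and the helper name fma_div are ours (real implementations start from a hardware reciprocal estimate and choose the iteration count to guarantee correct rounding):

    #include <math.h>
    #include <stdio.h>

    /* Approximate a/b by refining a reciprocal estimate with FMA. */
    double fma_div(double a, double b) {
        double x = 1.0f / (float)b;       /* low-precision initial guess */
        for (int i = 0; i < 4; i++) {
            double r = fma(-b, x, 1.0);   /* residual 1 - b*x, one rounding */
            x = fma(x, r, x);             /* Newton step: x <- x + x*r */
        }
        return a * x;
    }

    int main(void) {
        printf("%.17g\n", fma_div(1.0, 3.0));  /* ~0.33333333333333331 */
        return 0;
    }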

Dot product instruction

Some machines combine multiple fused multiply–add operations into a single step, e.g. performing a four-element dot product on two 128-bit SIMD registers, a₀×b₀ + a₁×b₁ + a₂×b₂ + a₃×b₃, with single-cycle throughput.
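
In scalar C the same four-element dot product is a chain of FMAs; hardware with a dot-product instruction retires the whole chain in one step (the helper name dot4 is ours):

    #include <math.h>
    #include <stdio.h>

    /* a0*b0 + a1*b1 + a2*b2 + a3*b3 as a chain of fused multiply-adds. */
    double dot4(const double a[4], const double b[4]) {
        double s = a[0] * b[0];
        s = fma(a[1], b[1], s);
        s = fma(a[2], b[2], s);
        s = fma(a[3], b[3], s);
        return s;
    }

    int main(void) {
        double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
        printf("%g\n", dot4(a, b));  /* 5 + 12 + 21 + 32 = 70 */
        return 0;
    }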

Support

The FMA operation is included in IEEE 754-2008.

The Digital Equipment Corporation (DEC) VAX's POLY instruction is used for evaluating polynomials with Horner's rule using a succession of multiply and add steps. Instruction descriptions do not specify whether the multiply and add are performed using a single FMA step.[7] This instruction has been a part of the VAX instruction set since its original 11/780 implementation in 1977.

The 1999 standard of the C programming language supports the FMA operation through the fma() standard math library function and the standard pragma #pragma STDC FP_CONTRACT, which controls whether the compiler may contract expressions such as a*b + c into FMA operations.
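
A minimal usage sketch in C99:

    #include <math.h>
    #include <stdio.h>

    /* ON permits (OFF forbids) the compiler to contract a*b + c
     * into a single FMA in ordinary expressions. */
    #pragma STDC FP_CONTRACT ON

    int main(void) {
        double a = 2.0, b = 3.0, c = 1.0;
        double explicit_fma = fma(a, b, c); /* always a single rounding */
        double maybe_fused  = a * b + c;    /* contraction is up to the compiler */
        printf("%g %g\n", explicit_fma, maybe_fused);  /* 7 7 */
        return 0;
    }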

The fused multiply–add operation was introduced as 'multiply–add fused' in the IBM POWER1 (1990) processor,[8] but has been added to numerous other processors since then:

  • HP PA-8000 (1996) and above
  • Hitachi SuperH SH-4 (1998)
  • SCE-Toshiba Emotion Engine (1999)
  • Intel Itanium (2001)
  • STI Cell (2006)
  • Fujitsu SPARC64 VI (2007) and above
  • (MIPS-compatible) Loongson-2F (2008)[9]
  • Elbrus-8SV (2018)
  • x86 processors with FMA3 and/or FMA4 instruction set
    • AMD Bulldozer (2011, FMA4 only)
    • AMD Piledriver (2012, FMA3 and FMA4)[10]
    • AMD Steamroller (2014)
    • AMD Excavator (2015)
    • AMD Zen (2017, FMA3 only)
    • Intel Haswell (2013, FMA3 only)[11]
    • Intel Skylake (2015, FMA3 only)
  • ARM processors with VFPv4 and/or NEONv2:
    • ARM Cortex-M4F (2010)
    • ARM Cortex-A5 (2012)
    • ARM Cortex-A7 (2013)
    • ARM Cortex-A15 (2012)
    • Qualcomm Krait (2012)
    • Apple A6 (2012)
    • All ARMv8 processors
      • Fujitsu A64FX has 'Four-operand FMA with Prefix Instruction'.
  • GPUs and GPGPU boards:
    • Advanced Micro Devices GPUs (2009) and newer
      • TeraScale 2 'Evergreen'-series based
      • Graphics Core Next-based
    • NVidia GPUs (2010) and newer
      • Fermi-based (2010)
      • Kepler-based (2012)
      • Maxwell-based (2014)
      • Pascal-based (2016)
      • Volta-based (2017)
    • Intel GPUs since Sandy Bridge
    • Intel MIC (2012)
    • ARM Mali T600 Series (2012) and above
  • Vector Processors:

References

  1. ^'The Feasibility of Ludgate's Analytical Machine'. Archived from the original on 2019-08-07. Retrieved 2020-08-30.
  2. ^Whitehead, Nathan; Fit-Florea, Alex (2011). 'Precision & Performance: Floating Point and IEEE 754 Compliance for NVIDIA GPUs' (PDF). Nvidia. Retrieved 2013-08-31.
  3. ^'fmadd instrs'.
  4. ^Kahan, William (1996-05-31). 'IEEE Standard 754 for Binary Floating-Point Arithmetic'.
  5. ^Quinnell, Eric (May 2007). Floating-Point Fused Multiply–Add Architectures (PDF) (PhD thesis). Retrieved 2011-03-28.
  6. ^Markstein, Peter (November 2004). Software Division and Square Root Using Goldschmidt's Algorithms (PDF). 6th Conference on Real Numbers and Computers. CiteSeerX 10.1.1.85.9648.
  7. ^'VAX instruction of the week: POLY'. Archived from the original on 2020-02-13.
  8. ^Montoye, R. K.; Hokenek, E.; Runyon, S. L. (January 1990). 'Design of the IBM RISC System/6000 floating-point execution unit'. IBM Journal of Research and Development. 34 (1): 59–70. doi:10.1147/rd.341.0059.
  9. ^'Godson-3 Emulates x86: New MIPS-Compatible Chinese Processor Has Extensions for x86 Translation'.
  10. ^Hollingsworth, Brent (October 2012). 'New 'Bulldozer' and 'Piledriver' Instructions'. AMD Developer Central.
  11. ^'Intel adds 22nm octo-core 'Haswell' to CPU design roadmap'. The Register. Archived from the original on 2012-03-27. Retrieved 2008-08-19.
