
Efficient ASIC Implementation of Artificial Neural Network with Posit Representation of Floating-Point Numbers

  • Conference paper
  • First Online:
Next Generation Systems and Networks (BITS-EEE-CON 2022)

Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 641))


Abstract

This paper presents a low-power ASIC architecture for a feedforward artificial neural network that uses the posit representation of floating-point numbers. Compared with an ASIC using the IEEE 754 format, the posit-based ASIC reduces power consumption and silicon area by 50% and is 13% faster, while achieving the same accuracy. An FPGA implementation of the same design consumes more power than the ASIC. The designs are synthesized using Cadence RTL Encounter on the TSMC 180 nm technology node.
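The abstract above is available without subscription; the posit format it builds on can be illustrated independently. A posit packs a sign bit, a variable-length "regime" run, up to `es` exponent bits, and a fraction into `n` bits, giving tapered precision around 1.0. The sketch below decodes an 8-bit posit with `es = 1`; these parameters are illustrative assumptions, not the configuration reported in the paper, and the decoder follows the general posit definition (value = (-1)^s · useed^k · 2^e · (1 + f), with useed = 2^(2^es)) rather than the authors' specific hardware datapath.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit (es exponent bits) into a Python float.

    Illustrative sketch of the general posit format; n=8, es=1 are
    assumed parameters, not taken from the paper.
    """
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")  # Not-a-Real (NaR) encoding: 1000...0
    sign = (bits >> (n - 1)) & 1
    if sign:
        bits = (-bits) & ((1 << n) - 1)  # two's-complement negate first
    rest = bits & ((1 << (n - 1)) - 1)   # strip the sign bit
    # Regime: run length of identical bits following the sign bit.
    first = (rest >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((rest >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first else -run       # regime value (useed exponent)
    i -= 1                               # skip the terminating regime bit
    # Exponent: up to es bits, zero-padded if truncated at the LSB end.
    exp = 0
    for _ in range(es):
        exp <<= 1
        if i >= 0:
            exp |= (rest >> i) & 1
            i -= 1
    # Fraction: remaining bits, with an implicit hidden 1.
    frac_bits = i + 1
    if frac_bits > 0:
        frac = rest & ((1 << frac_bits) - 1)
        mantissa = 1.0 + frac / (1 << frac_bits)
    else:
        mantissa = 1.0
    value = (2.0 ** (k * (1 << es) + exp)) * mantissa  # useed = 2**(2**es)
    return -value if sign else value
```

For example, with these assumed parameters `decode_posit(0x40)` yields 1.0, `decode_posit(0x48)` yields 1.5, and `decode_posit(0x20)` yields 0.25; the run-length regime field is what lets posits trade dynamic range against fraction precision, the property the paper exploits for a smaller, lower-power datapath than IEEE 754.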



Author information

Correspondence to Anu Gupta.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Gupta, A., Gupta, A., Gupta, R. (2023). Efficient ASIC Implementation of Artificial Neural Network with Posit Representation of Floating-Point Numbers. In: Bansal, H.O., Ajmera, P.K., Joshi, S., Bansal, R.C., Shekhar, C. (eds) Next Generation Systems and Networks. BITS-EEE-CON 2022. Lecture Notes in Networks and Systems, vol 641. Springer, Singapore. https://doi.org/10.1007/978-981-99-0483-9_5


  • DOI: https://doi.org/10.1007/978-981-99-0483-9_5

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-0482-2

  • Online ISBN: 978-981-99-0483-9

  • eBook Packages: Engineering (R0)
