Nir Shavit

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Graeme Bartlett (talk | contribs) at 10:48, 22 February 2023 (Recognition: clean up, typo(s) fixed: alongwith → along with). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Nir Shavit
Alma mater: Technion, Hebrew University of Jerusalem
Known for: Software transactional memory, wait-free algorithms
Awards: Gödel Prize, Dijkstra Prize
Scientific career
Fields: Computer science: concurrent and parallel computing
Thesis: (1990)
Website: www.cs.tau.ac.il/~shanir/

Nir Shavit (Hebrew: ניר שביט) is an Israeli computer scientist. He is a professor in the Computer Science Department at Tel Aviv University and a professor of electrical engineering and computer science at the Massachusetts Institute of Technology.

Nir Shavit received B.Sc. and M.Sc. degrees in computer science from the Technion – Israel Institute of Technology in 1984 and 1986, respectively, and a Ph.D. in computer science from the Hebrew University of Jerusalem in 1990. Shavit is a co-author of the book The Art of Multiprocessor Programming, a winner of the 2004 Gödel Prize in theoretical computer science for his work on applying tools from algebraic topology to model shared memory computability, and a winner of the 2012 Dijkstra Prize for the introduction and first implementation of software transactional memory. He is a past program chair of the ACM Symposium on Principles of Distributed Computing (PODC) and the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA).

His research covers techniques for designing, implementing, and reasoning about multiprocessors, and in particular the design of concurrent data structures for multi-core machines.

Recognition

He co-founded a company named Neural Magic along with Alexander Matveev. The company claims to use highly sparse neural networks to make deep learning so computationally efficient that GPUs are not needed; for certain use cases it claims a speedup of 175x.[2]

References

  1. ^ ACM Names Fellows for Computing Advances that Are Transforming Science and Society Archived 2014-07-22 at the Wayback Machine, Association for Computing Machinery, accessed 2013-12-10.
  2. ^ "The Future of Deep Learning is Sparse" - Neural Magic, 12 July 2019.