JiscMail - Email discussion lists for the UK Education and Research communities

CPHC-CONF Archives (cphc-conf@JISCMAIL.AC.UK), November 2019



Subject: DEADLINE EXTENDED: AccML'20 - HiPEAC Workshop on Accelerated Machine Learning 2020
From: Jose Cano Reyes <[log in to unmask]>
Reply-To: Jose Cano Reyes <[log in to unmask]>
Date: Fri, 8 Nov 2019 22:25:53 +0000
Content-Type: text/plain
Parts/Attachments: text/plain (1 lines)

==================================================================



Workshop on Accelerated Machine Learning (AccML)



Co-located with the HiPEAC 2020 Conference

(https://www.hipeac.net/2020/bologna/)



January 20, 2020

Bologna, Italy

==================================================================



       UPDATE: DEADLINE EXTENSION TO NOVEMBER 22, 2019



-------------------------------------------------------------------------

CALL FOR CONTRIBUTIONS

-------------------------------------------------------------------------

In the last five years, the remarkable performance achieved by machine learning in a variety of application areas (natural language processing, computer vision, games, etc.) has led to the emergence of heterogeneous architectures to accelerate machine learning workloads. In parallel, production deployment, model complexity and diversity have pushed for higher-productivity systems, more powerful programming abstractions, software and system architectures, dedicated runtime systems and numerical libraries, and deployment and analysis tools. Deep learning models are generally memory- and computationally intensive, for both training and inference. Accelerating these operations has obvious advantages, first by reducing energy consumption (e.g. in data centers), and second by making these models usable on smaller devices at the edge of the Internet. In addition, while convolutional neural networks have motivated much of this effort, numerous applications and models involve a wider variety of operations, network architectures, and data processing. These applications and models continually challenge computer architecture, the system stack, and programming abstractions. The high level of interest in these areas calls for a dedicated forum to discuss emerging acceleration techniques and computation paradigms for machine learning algorithms, as well as the applications of machine learning to the construction of such systems.
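To make the "memory- and computationally intensive" point concrete, here is a rough back-of-the-envelope sketch in Python; the layer sizes are hypothetical and chosen purely for illustration, not taken from the call:

    # Hypothetical cost estimate for one 3x3 convolution layer, illustrating
    # why deep learning workloads strain both memory and compute.
    cin, cout, k = 256, 256, 3   # input channels, output channels, kernel size (assumed)
    h, w = 56, 56                # spatial resolution of the output feature map (assumed)

    params = cout * cin * k * k          # weight count, ignoring bias
    macs = params * h * w                # multiply-accumulates for a single input

    print(f"parameters: {params:,} (~{params * 4 / 1e6:.1f} MB as float32)")
    print(f"MACs/input: {macs:,} (~{macs / 1e9:.2f} GMACs)")

Even this single layer holds roughly 0.6 million weights and needs on the order of two billion multiply-accumulates per input, which is why acceleration matters for both data centers and edge devices.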





-------------------------------------------------------------------------

Links to the Workshop pages

-------------------------------------------------------------------------

HiPEAC: https://www.hipeac.net/2020/bologna/#/schedule/sessions/7739/



Organizers: http://workshops.inf.ed.ac.uk/accml/





-------------------------------------------------------------------------

Speakers

-------------------------------------------------------------------------



* Keynote speaker: Luca Benini (ETH Zurich and U. di Bologna)



Title: Extreme Edge AI on Open Hardware



Abstract: Edge Artificial Intelligence (AI) is the new mega-trend, as privacy concerns and network bandwidth/latency bottlenecks prevent cloud offloading of AI functions in many application domains, from autonomous driving to advanced prosthetics. Hence we need to push AI toward sensors and actuators. I will give an overview of recent efforts in developing systems-on-chip based on open-source hardware that are capable of significant analytics and AI functions "at the extreme edge", i.e. within the limited power budget of traditional microcontrollers that can be co-located and integrated with the sensors/actuators themselves. These open, extreme-edge AI platforms create an exciting playground for research and innovation.



Bio: Luca Benini holds the Chair of Digital Circuits and Systems at ETH Zurich and is Full Professor at the Università di Bologna. He received a PhD from Stanford University. From 2009 to 2012 he served as chief architect at STMicroelectronics France. Dr. Benini's research interests are in energy-efficient computing systems design, from embedded to high-performance. He is also active in the design of ultra-low-power VLSI circuits and smart sensing micro-systems. He has published more than 1000 peer-reviewed papers and five books. He is an ERC Advanced Grant winner, a Fellow of the IEEE and the ACM, and a member of the Academia Europaea. He is the recipient of the 2016 IEEE CAS Mac Van Valkenburg Award and the 2019 IEEE TCAD Donald O. Pederson Best Paper Award.



---



* Invited speaker: Carole-Jean Wu (Facebook AI, Arizona State University)



Title: Machine Learning at Scale



Abstract: Machine learning systems are being widely deployed in production datacenter infrastructure and over billions of edge devices. This talk seeks to address key system design challenges when scaling machine learning solutions to billions of people. What are key similarities and differences between cloud and edge infrastructure? The talk will conclude with open system research directions for deploying machine learning at scale.



Bio: Carole-Jean Wu is a Research Scientist at Facebook's AI Infrastructure Research. She is also a tenured Associate Professor of CSE at Arizona State University. Carole-Jean's research focuses on computer and system architecture. More recently, her research has pivoted into designing systems for machine learning. She is the lead author of “Machine Learning at Facebook: Understanding Inference at the Edge”, which presents the unique design challenges faced when deploying ML solutions at scale to the edge, from billions of smartphones to Facebook's virtual reality platforms. Carole-Jean received her Ph.D. and M.A. from Princeton and her B.Sc. from Cornell.



---



* Invited speaker: Albert Cohen (Google, Paris)



Title: Abstractions, Algorithms and Infrastructure for Post-Moore Optimizing Compilers



Abstract: MLIR is a recently announced open-source infrastructure to accelerate innovation in machine learning (ML) and high-performance computing (HPC). It addresses the growing software and hardware fragmentation across machine learning frameworks, enabling machine learning models to be consistently represented and executed on any type of hardware. It also unifies graph representations and operators for ML and HPC. It facilitates the design and implementation of code generators, translators and optimizations at different levels of abstraction, and also across application domains, hardware targets and execution environments. We will share our vision, progress and plans in the MLIR project, zooming in on graph-level and loop nest optimization as illustrative examples.



Bio: Albert Cohen is a research scientist at Google. He worked as a research scientist at Inria from 2000 to 2018. He graduated from École Normale Supérieure de Lyon and received his PhD from the University of Versailles in 1999 (awarded two national prizes). He has also been a visiting scholar at the University of Illinois, an invited professor at Philips Research, and a visiting scientist at Facebook Artificial Intelligence Research. Albert Cohen works on parallelizing and optimizing compilers, parallel programming languages and systems, and synchronous programming for reactive control systems. He has served as general or program chair of some of the main conferences in the area and as a member of the editorial board of two journals. He has co-authored more than 180 peer-reviewed papers and has been the advisor of 26 PhD theses. Several research projects led by Albert Cohen have resulted in effective transfer to production compilers and programming environments.



---



* Invited speaker: Rune Holm (Arm)



Title: Big neural networks in small spaces: towards end-to-end optimisation for ML at the edge



Abstract: Neural networks have taken over use case after use case, from image recognition, speech recognition and image enhancement to driving cars, and show no sign of letting up. Yet many of these use cases are still served by acquiring data and sending it off to the cloud for inference. On-device ML brings unprecedented capabilities and opportunities to edge devices, with improved privacy, security, and reliability. This talk explores the many aspects of system optimisation for edge ML, from training-time optimisation, to the compilation of neural networks, to the design of machine learning hardware, and looks at ways to save execution time and memory footprint while preserving accuracy.
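
As one hedged illustration of the kind of footprint saving such edge optimisation targets, the following Python sketch applies simple post-training int8 weight quantization with NumPy. It is a toy example under assumed tensor shapes, not Arm's toolchain or the speaker's method:

    # Toy post-training quantization sketch (illustrative only, not from the talk):
    # symmetric linear mapping of float32 weights onto int8, cutting memory ~4x.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in layer weights

    scale = np.abs(weights).max() / 127.0                          # one scale per tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

    dequantized = q.astype(np.float32) * scale                     # what inference would use
    print("float32 size:", weights.nbytes, "bytes")
    print("int8 size:   ", q.nbytes, "bytes")
    print("mean abs quantization error:", float(np.abs(weights - dequantized).mean()))

The int8 copy occupies a quarter of the memory of the float32 original, at the cost of a small, measurable reconstruction error; production deployments typically recover accuracy with per-channel scales or quantization-aware training.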



Bio: Rune Holm has been part of the semiconductor industry for more than a decade. He started out on Mali GPUs, doing GPU microarchitecture and designing shader compilers for VLIW cores. He then moved on to research into experimental GPGPU designs and architectures targeting HPC, machine learning and computer vision. He's currently part of the Arm Machine Learning Group, focusing on neural network accelerator architecture and compilers optimising for these designs.





-------------------------------------------------------------------------

Topics

-------------------------------------------------------------------------

Topics of interest include (but are not limited to):

- Novel ML systems: heterogeneous multi/many-core systems, GPUs, FPGAs;
- Software ML acceleration: languages, primitives, libraries, compilers and frameworks;
- Novel ML hardware accelerators and associated software;
- Emerging semiconductor technologies with applications to ML hardware acceleration;
- ML for the construction and tuning of systems;
- Cloud and edge ML computing: hardware and software to accelerate training and inference;
- Computing systems research addressing the privacy and security of ML-dominated systems.





-------------------------------------------------------------------------

Submission

-------------------------------------------------------------------------

Papers will be reviewed by the workshop's technical program committee according to criteria regarding a submission's quality, relevance to the workshop's topics, and, foremost, its potential to spark discussions about directions, insights, and solutions in the context of accelerating machine learning. Research papers, case studies, and position papers are all welcome.

In particular, we encourage authors to submit work-in-progress papers: to facilitate the sharing of thought-provoking ideas and high-potential though preliminary research, authors are welcome to make submissions describing early-stage, in-progress, and/or exploratory work in order to elicit feedback, discover collaboration opportunities, and generally spark discussion.



The workshop does not have formal proceedings.





-------------------------------------------------------------------------

Important Dates

-------------------------------------------------------------------------

Submission deadline: November 22, 2019

Notification of decision: December 6, 2019





-------------------------------------------------------------------------

Organizers

-------------------------------------------------------------------------

José Cano (University of Glasgow)

Valentin Radu (University of Edinburgh)

Marco Cornero (DeepMind)

Albert Cohen (Google)

Olivier Temam (DeepMind)

Alex Ramirez (Google)





