About ICADCML 2020
The First International Conference on Advances in Distributed Computing and Machine Learning (ICADCML-2020) is an annual forum that brings together ideas, innovations, and lessons associated with distributed computing and machine learning and their application in different areas. Nowadays, most computing systems support parallel and distributed computing, and distributed computing plays an increasingly important role in modern data processing, information fusion, and electronics engineering. In particular, applying machine learning in distributed environments is becoming an element of high added value and economic potential. Research on intelligent distributed systems has matured during the last decade, and many effective applications are now deployed. Machine learning is changing our society, and its application in distributed environments such as the Internet, electronic commerce, mobile communications, wireless devices, and distributed computing is growing in both industry and research. These technologies evolve constantly as a result of the large research and technical effort being undertaken in universities and businesses alike. The exchange of ideas between scientists and technicians from academia and industry is essential to facilitate the development of systems that meet the demands of today's society. Technology transfer in this field is still a challenge, and for that reason such contributions will be given special consideration at this conference. ICADCML-2020 is a forum in which to present the application of innovative techniques to complex problems. The scope of the conference has been kept wide, and the topics covered include (but are not limited to):
  • Distributed computing, networking, security, and applications
  • Cloud computing system and network design
  • Green computing design and issues
  • Cloud storage design and networking
  • Cloud system and storage security
  • Machine learning, data mining for cloud computing
  • Edge, fog, and mobile edge computing
  • Security, privacy, trust for cloud computing
  • Machine learning approaches and applications
  • Machine learning for distributed computing
  • Machine learning for traffic engineering and congestion control
  • Machine learning for network measurement
  • Internet of Things (IoT) Applications and Services
  • Security and Privacy for Internet of Things (IoT)
  • Blockchain and Decentralized applications
  • Blockchain and trust management
  • Blockchain implementation and application
  • Blockchain architectures and algorithms for heterogeneous networks
  • Blockchain based IoT security solutions
  • Blockchain in edge and cloud computing
  • Blockchain in IoT device identification management
  • Blockchain in IoT device authentication, authorization and access control
  • Blockchain in IoT data security
  • Blockchain in IoT system security
Publication

The Proceedings of ICADCML 2020 will be published in the Springer Lecture Notes in Networks and Systems (LNNS) book series [SCOPUS indexed].
The books of this series are submitted to ISI Proceedings, SCOPUS, Google Scholar, and SpringerLink for indexing.
Journal Support
Extended versions of selected papers are invited and recommended by the conference for submission to the following SCOPUS-indexed journals:
IJCSE (Inderscience Publishers)

IJES (Inderscience Publishers)

Journal of Mobile Multimedia (River Publishers)
Prospective authors are invited to submit either extended versions of ICADCML 2020 conference papers or fresh submissions for the following forthcoming edited books:
1. CRC Press: "Cognitive Computing using Green Technologies: Modelling Techniques and Applications", ISBN: 9780367487966
2. IGI Global: "Building Smart and Secure Environments Through the Fusion of Virtual Reality, Augmented Reality, and the IoT"
Keynote
  • I. Overview of Resampling Methods
    by Prof. DVLN Somayajulu,
    Director, IIITDM, Kurnool

    Abstract: Resampling methods are an indispensable tool in modern statistics. They involve repeatedly drawing samples from a training set and refitting a model of interest on each sample in order to obtain additional information about the fitted model. These approaches may allow us to obtain information that would not be available from fitting the model only once using the original training sample. Resampling approaches can be computationally expensive, because they involve fitting the same statistical method multiple times using different subsets of the training data. However, due to recent advances in computing power, the computational requirements of resampling methods are generally not prohibitive. My talk focuses on two of the most commonly used resampling methods, cross-validation and the bootstrap, along with a case study using the R tool. Both methods are important tools in the practical application of many learning procedures.
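
A minimal sketch of the two methods named in the abstract, written in Python with scikit-learn rather than the R tool the talk itself uses; the load_diabetes dataset, the linear model, and the 200 bootstrap replicates are illustrative assumptions, not material from the keynote.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

X, y = load_diabetes(return_X_y=True)
model = LinearRegression()

# Cross-validation: refit the model on each training fold and score it
# on the corresponding held-out fold.
cv_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold CV R^2:", cv_scores.mean())

# Bootstrap: repeatedly resample the training set with replacement,
# refit, and collect coefficient estimates to gauge their variability.
boot_coefs = []
for _ in range(200):
    Xb, yb = resample(X, y)  # draw n rows with replacement
    boot_coefs.append(LinearRegression().fit(Xb, yb).coef_)
print("Bootstrap std. error per coefficient:", np.array(boot_coefs).std(axis=0))
```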

  • II. Fog Computing: An Emerging Paradigm
    by Prof. Prasanta K. Jana,
    Professor, Indian Institute of Technology (ISM), Dhanbad

    Abstract: The world has witnessed tremendous growth of the Internet of Things (IoT). A huge number of devices are now connected through the Internet to provide instant services to end users in application domains such as smart cities, autonomous vehicles, smart home devices, e-healthcare, map services, and so on. For the last decade, cloud computing has played an important role in providing such high-demand services through data centers built by Amazon, Microsoft, Google, IBM, and others. However, cloud computing suffers from limitations such as latency, traffic congestion, processing of massive data, and communication cost, mainly due to the large distance between end users and the cloud data centers. Fog computing has emerged as an essential paradigm to resolve these challenges faced by traditional cloud computing by extending storage, computation, and communication facilities toward the edge of the network. In recent years, fog computing has become one of the most popular technologies supporting geographically distributed, latency-sensitive, and QoS-aware IoT applications in a time- and cost-effective manner.
    In this talk, I will begin with an introduction to fog computing, covering the necessary background and the motivation for this paradigm. Next, I will discuss various proposed fog-based architectures with their merits and demerits. This will be followed by the most important algorithmic part of fog computing, i.e., task offloading, for which a brief survey of proposed algorithms will be presented. The session will end with applications and future research directions of fog computing.

  • III. Research Roadmap of Offloading in Federated Cloud, Edge and Fog Systems
    by Dr. Binayak Kar,
    Asst. Prof., National Taiwan University of Science and Technology (NTUST), Taiwan

    Abstract: The Internet of Things (IoT) devices that have taken the world by storm need computational power and storage capacity for the huge amount of data they generate in order to provide services to their subscribers. Currently, cloud computing, edge computing, and fog computing are the potential paradigms that could fulfill these subscribers' demands. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. However, it introduces high communication latency because the cloud servers are far away from the end users or subscribers. Some applications have very strict latency limits, which makes the cloud computing paradigm unsuitable for them. This is where edge and fog computing models come into play, as they can provide similar services with lower latency by being closer to users than the clouds. However, both edge and fog also have certain limitations while providing services, such as capacity, capability, and coverage. Federation plays a key role in resolving these issues, since it allows one service provider to extend its capacity, capability, and service coverage to satisfy its users' demands. Based on users' demands, input requests can be classified as ultra-low, low, and loose latency tasks, and these classes lead to various offloading scenarios between clouds, edges, and fogs. For example, when there is a capacity limitation on the edge, the edge can offload its loose latency tasks to the clouds; in some cases, the cloud can offload highly time-sensitive tasks to the edges to reduce communication latency and cost. In this talk, we will discuss various federated systems considering clouds, edges, and fogs, and the related offloading scenarios in detail.
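
As a toy illustration of the latency-class-driven offloading described above, the hypothetical sketch below routes tasks tagged ultra-low, low, or loose latency to edge, fog, or cloud tiers and lets work overflow to the next tier when capacity runs out; the tier capacities and task names are made-up values, not taken from the talk.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_class: str  # "ultra-low", "low", or "loose"

# Hypothetical per-tier capacities (number of concurrently hosted tasks).
CAPACITY = {"edge": 2, "fog": 4, "cloud": 100}
load = {"edge": 0, "fog": 0, "cloud": 0}

def place(task: Task) -> str:
    """Pick a tier for a task, preferring the lowest-latency tier it needs."""
    preference = {
        "ultra-low": ["edge", "fog", "cloud"],  # must stay as close to the user as possible
        "low":       ["fog", "cloud"],
        "loose":     ["cloud"],                 # loose-latency work can go straight to the cloud
    }[task.latency_class]
    for tier in preference:
        if load[tier] < CAPACITY[tier]:
            load[tier] += 1
            return tier
    return "rejected"

tasks = [Task("sensor-alarm", "ultra-low"), Task("video-analytics", "low"),
         Task("nightly-batch", "loose"), Task("ar-overlay", "ultra-low"),
         Task("map-update", "ultra-low")]       # third ultra-low task overflows the edge
for t in tasks:
    print(t.name, "->", place(t))
```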

  • IV. Generating Pseudo-SQL Queries from Under-Specified Natural Language Questions
    by Dr. Fuxiang Chen,
    Senior Research Scientist, DeepSearch Inc., Korea/Singapore

    Abstract: Generating SQL code from natural language questions (NL2SQL) is an emerging research area. Existing studies have mainly focused on clear scenarios where the specified information is fully given to generate a SQL query. However, in developer forums such as Stack Overflow, questions cover more diverse tasks, including table manipulation or performance issues, where a table is not specified. The SQL query posted on Stack Overflow, Pseudo-SQL (pSQL), does not usually contain table schemas and is not necessarily executable, yet it is sufficient to guide developers. In this talk, I will first introduce the problem of generating SQL code from natural language questions (NL2SQL), illustrated with existing approaches. Then, I will describe a new task, NL2pSQL, which generates pSQL code from natural language questions about under-specified database issues. In addition, two new metrics suited to the proposed NL2pSQL task, Canonical-BLEU and SQL-BLEU, were defined in place of the conventional BLEU. With a baseline model using a sequence-to-sequence architecture integrated with a denoising autoencoder, the validity of the task is confirmed. Experiments show that the proposed NL2pSQL approach yields well-formed queries (up to 43% more than a standard Seq2Seq model).
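
For intuition about BLEU-style evaluation of generated queries, the sketch below scores a candidate SQL string against a reference at the token level using NLTK's standard sentence-level BLEU; the tokenizer and example queries are made up, and Canonical-BLEU and SQL-BLEU from the talk are specialized variants whose definitions are not reproduced here.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def sql_tokens(query):
    # Naive tokenizer for illustration; real SQL tokenization is richer.
    return query.lower().replace("(", " ( ").replace(")", " ) ").replace(",", " , ").split()

reference = "SELECT name FROM users WHERE age > 30"
candidate = "SELECT name FROM users WHERE age >= 30"

score = sentence_bleu(
    [sql_tokens(reference)],   # one (or more) reference token sequences
    sql_tokens(candidate),     # candidate token sequence
    smoothing_function=SmoothingFunction().method1,
)
print(f"token-level BLEU: {score:.3f}")
```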

Tutorials and Workshops
  • I. Tutorial: Energy-Efficient Cloud Computing
    by Dr. Sanjaya Kumar Panda,
    Asst. Prof., NIT Warangal

    Abstract: The massive growth of cloud computing leads to huge amounts of energy consumption and the release of carbon footprints, as data centers house a large number of servers. Consequently, cloud service providers are looking for eco-friendly solutions to reduce energy consumption and carbon emissions. United States data centers consumed approximately 91 billion kWh of electricity in 2013, and this is estimated to rise to 140 billion kWh by 2020, according to a Natural Resources Defense Council report. As a result, task scheduling has drawn attention, in which efficient resource utilization and minimum energy consumption are key considerations. This is an exigent issue, especially for heterogeneous environments. In this tutorial, we will discuss energy-efficient task scheduling algorithms that address the demerits associated with the task consolidation and scheduling problem.
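
As a toy example of energy-aware scheduling in a heterogeneous environment, the hypothetical sketch below greedily assigns each deadline-constrained task to the feasible machine that adds the least energy (execution time times active power); the machine and task parameters are invented, and this baseline is not one of the algorithms covered in the tutorial.

```python
# Hypothetical heterogeneous machines: name -> (speed in MI/s, active power in W).
machines = {"m1": (1000, 120.0), "m2": (2000, 200.0), "m3": (4000, 450.0)}
finish_time = {m: 0.0 for m in machines}  # when each machine becomes free

# Tasks: (name, length in millions of instructions, deadline in seconds).
tasks = [("t1", 8000, 10.0), ("t2", 2000, 2.0), ("t3", 16000, 12.0), ("t4", 4000, 3.0)]

total_energy = 0.0
for task, length, deadline in tasks:
    # Among machines that can still meet the deadline, pick the one that adds
    # the least energy; if none can, fall back to the fastest machine.
    feasible = [m for m, (speed, _) in machines.items()
                if finish_time[m] + length / speed <= deadline]
    pool = feasible or [max(machines, key=lambda m: machines[m][0])]
    chosen = min(pool, key=lambda m: (length / machines[m][0]) * machines[m][1])
    speed, power = machines[chosen]
    runtime = length / speed
    finish_time[chosen] += runtime
    total_energy += runtime * power
    print(f"{task} -> {chosen} (finishes at {finish_time[chosen]:.1f} s)")
print(f"total energy: {total_energy:.0f} J")
```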

  • II. Tutorial: Analysis of Keystroke Patterns with a Machine Learning Approach
    by Professor (Dr.) Utpal Roy,
    Siksha Bhavana, Visva-Bharati, Santiniketan

    Abstract: The latest trend in authenticating and identifying users is to exploit the potential of biometrics. Recently, keystroke dynamics on smartphones has been gaining popularity, since the sensor technology attached to a smartphone is now available at low cost and is becoming increasingly common, which improves the accuracy and reliability of such models. The increasing use of sensor technology creates opportunities as well as challenges for developing the next generation of keystroke dynamics systems. In this domain, machine learning is used for user recognition, user identity verification, and prediction of users' personal traits by measuring and analyzing the way they type. In this discussion, keystroke dynamics biometrics is studied as a means of user identification and authentication with the help of popular machine learning tools and techniques.
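
As a rough illustration of the kind of pipeline the tutorial refers to, the hypothetical sketch below turns key press/release timestamps into simple hold-time and flight-time features and trains an off-the-shelf classifier to separate two simulated users; the timing distributions, feature set, and classifier choice are assumptions, not material from the tutorial.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(events):
    """events: list of (press_time, release_time) for each key of a fixed phrase."""
    holds = [r - p for p, r in events]              # how long each key is held
    flights = [events[i + 1][0] - events[i][1]      # gap between releasing one key
               for i in range(len(events) - 1)]     # and pressing the next
    return holds + flights

def synthetic_session(hold_mean, flight_mean, n_keys=8):
    """One simulated typing sample of a fixed-length phrase."""
    t, events = 0.0, []
    for _ in range(n_keys):
        hold = rng.normal(hold_mean, 0.01)
        events.append((t, t + hold))
        t += hold + rng.normal(flight_mean, 0.02)
    return features(events)

# Two simulated users with different typing rhythms.
X = [synthetic_session(0.09, 0.15) for _ in range(50)] + \
    [synthetic_session(0.13, 0.25) for _ in range(50)]
y = [0] * 50 + [1] * 50

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```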

  • III. Workshop: Demystifying DevOps with Docker and Kubernetes
    by Prof. Jyoti Prakash Sahoo,
    Asst. Prof., SOA Deemed to be University, Bhubaneswar, Odisha

    Abstract: The workshop is targeted at potential participants as a prelude to the highly focused ICADCML-2020 conference. It endeavors to enable prospective participants to respond better to obsolescence in IT, with DevOps as an appealing paradigm shift in application lifecycle automation. DevOps is a set of practices that combines software development (Dev) and information-technology operations (Ops). A DevOps architecture is used for applications hosted on cloud platforms and for large distributed applications. The workshop, followed by hands-on sessions, will guide participants through DevOps with Docker and Kubernetes. Docker is a widely used DevOps tool for building, shipping, and running distributed applications on multiple systems, while Kubernetes is a production-grade, open-source platform that orchestrates the placement (scheduling) and execution of application containers within and across computer clusters. It works with a range of container tools, including Docker. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service.
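
To keep the code samples in one language, the hypothetical sketch below drives a basic build-and-run workflow from Python with the Docker SDK (the docker package) rather than the Docker CLI and kubectl used in the hands-on sessions; the image tag, port mapping, and container name are made-up values.

```python
import docker

# Connect to the local Docker daemon (the same daemon the `docker` CLI talks to).
client = docker.from_env()

# Build an image from a Dockerfile in the current directory (hypothetical tag).
image, build_logs = client.images.build(path=".", tag="icadcml-demo:latest")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run the freshly built image as a detached container with a port mapping.
container = client.containers.run(
    "icadcml-demo:latest",
    detach=True,
    ports={"8080/tcp": 8080},
    name="icadcml-demo",
)
print("started container:", container.short_id)

# The Kubernetes analogue would be a Deployment plus a Service manifest applied
# with `kubectl apply -f`, after which the cluster keeps the containers running.
```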

Conference Sponsors
Technical Collaboration
Conference Management
Venue
SJT Gallery
VIT, Vellore-632014
Tamil Nadu, India
Publication Partners