Abstract:

Release updates and news for Aliyun E-MapReduce

E-MapReduce team

Version 1.3.3 (released)

  • Commercial release. Users can now use the E-MapReduce service without applying for access

Version 1.3.4 (in development)

  • Upgrade the JDK to 1.8
  • Upgrade Hadoop to 2.7.2
  • Add Python 2.7.1 and Python 3.4
  • Add the NumPy library
  • Support Presto, Phoenix, JStorm, and Oozie
  • Support mixed deployment of Hadoop and HBase
  • Support the Shenzhen and Shanghai regions (data centers)

Version 1.4 (under development):

  • Customizable alerts for user execution plans and cluster running status

Version 1.4.1

  • A dashboard of overall cluster performance
  • Cluster status monitoring and alerts

News

  • Li Jin, a senior director at Alibaba Cloud, said that open source adoption has four stages: embracing open source, giving back to open source, integrating open source, and feeding back into open source
  • As big data technology continues to mature, data-assisted decision making will increasingly permeate product development and business processes. Roles in development, product, marketing, and business will become more "data sensitive" and capable of self-service analysis. Ultimately, data serves business logic
  • Avoid three misconceptions: how to use data to improve conversion. In recent years, as market costs have risen, the extensive traffic-driven model has become unsustainable, and the market has gradually shifted toward the idea that product design and product operations are king. But whether traffic, product design, or operations is king, the core question is conversion
  • Several Apache big data projects have quietly been promoted to Top-Level Projects: Kylin, Lens, Ignite, Brooklyn, and Apex have all recently graduated. Driven by market demand, the community keeps incubating and growing strong big data projects
  • A report from the 2016 Spark Summit in San Francisco covers all aspects of Spark today
  • Spark divides executor memory into storage memory and execution memory. Prior to version 1.6, the heap was split between the two by a fixed ratio; from 1.6 on, the boundary between the two regions can be adjusted dynamically at runtime
  • New hardware, especially large-memory machines, and running Hadoop in the cloud are the main directions for the future
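The storage/execution split mentioned above can be sketched with a toy calculation. This is a simplified illustration, not Spark's actual accounting (which lives in its memory manager), and it assumes the Spark 1.6 defaults of 300 MB reserved memory, spark.memory.fraction = 0.75, and spark.memory.storageFraction = 0.5; unlike the pre-1.6 fixed split, the storage boundary here is only an initial soft boundary that execution and storage can borrow across at runtime.

```python
# Simplified sketch of the Spark 1.6 unified memory model (illustration only).
# Assumed defaults: 300 MB reserved, spark.memory.fraction = 0.75,
# spark.memory.storageFraction = 0.5.

def unified_memory_regions(heap_mb,
                           reserved_mb=300,
                           memory_fraction=0.75,
                           storage_fraction=0.5):
    """Return (unified, storage, execution) region sizes in MB."""
    usable = heap_mb - reserved_mb
    unified = usable * memory_fraction      # shared storage + execution pool
    storage = unified * storage_fraction    # soft boundary: either side may
    execution = unified - storage           # borrow free space from the other
    return unified, storage, execution

# With a 4 GB executor heap, the pool splits evenly between the two regions:
unified, storage, execution = unified_memory_regions(4096)
```

In the pre-1.6 model the equivalent ratios (the old spark.storage.memoryFraction and spark.shuffle.memoryFraction) were fixed walls, so unused storage memory could not help a shuffle-heavy job; the dynamic boundary removes that waste.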