This is the second day of my participation in the Gwen Challenge.

With the goal of reducing the costs and time spent on new projects, cloud computing has gained significant ground in computer science.

Before this proposal became a reality, significant investment was required to implement data centers capable of supporting the processing and storage demanded by projects with heavy computing requirements. Today, the concept of cloud computing is a well-established reality: many providers offer a range of services over the Internet with no need for real investment in major infrastructure. As a result, users of cloud environments can access, for a short period of time, a volume of computing resources that until recently was restricted to large organizations.

The important economic differentiator of cloud computing is pay-per-use; that is, customers pay only for the resources they actually consume and no longer need to sink long-term investments into data centers as they did in the past. If an organization wants to run a short experiment that requires a cluster with hundreds of machines, it can simply rent cloud servers for the time the processing takes and pay only a fraction of what the total investment in infrastructure would be. Beyond the economic benefits, this business model allows rapid responses to sudden changes in application behavior. At certain times of the year, such as Mother’s Day and Christmas, it is common for large retailer websites to become overloaded, resulting in slow response times and failures. Responding quickly to these issues can be the difference between a sale and the loss of a customer to a competing site. Cloud computing helps prevent and solve such problems through elasticity, that is, by increasing and decreasing computing power according to preset parameters, such as CPU utilization.

There are several service delivery models: infrastructure (IaaS), platform (PaaS), and software (SaaS). Cloud providers keep refining their services, including promoting convergence among these models. As service delivery grew, the boundaries of the vision defined by NIST in 2011 were redefined. Many vendors have renamed their services, especially under SaaS, because the word “software” has a very broad meaning. Hence, Database as a Service (DBaaS) and other models such as Machine Learning as a Service (MLaaS) and Monitoring as a Service (MaaS) have emerged. In this text we use the term XaaS, which stands for “everything as a service” and covers the several models implemented by providers.

In this scenario, the limitations of monolithic architectures hinder scalability gains. One way to mitigate this difficulty is to use cloud services such as FaaS to process segments of a system. FaaS is a model in which the client uploads a piece of code, written in one of the specific languages the provider supports, to be executed in response to a service call, usually through a REST API. The model hides all the complexity involved in the infrastructure. However, leveraging the FaaS model is not trivial, as it requires the user to understand the consumption model offered by each provider. In addition, developers must build applications with this model in mind, or invest considerable time adapting applications that were conceived without FaaS. A framework that abstracts this complexity away, takes care of publishing the code, as in Lambda, and simplifies consuming the service lets developers focus on the functionality of the application rather than on the details of the provider.
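As an illustration of the FaaS model just described, here is a minimal sketch of a function written against AWS Lambda's NodeJS programming model; the event shape follows the API Gateway proxy format, and the greeting logic is an assumption for the example:

```javascript
// A minimal Lambda-style handler: the provider invokes this function
// in response to a service call (e.g., through a REST API gateway).
// The "event" carries the request payload; the returned object
// becomes the HTTP response. The payload fields are assumptions.
exports.handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

Note that the code contains no server setup at all: provisioning, scaling, and routing are entirely the provider's responsibility.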
Principles commonly associated with microservices include:

• Application of multiple development paradigms, such as functional and imperative;
• Running applications on lightweight container services, such as Docker;
• Decentralized continuous delivery;
• Adoption of a DevOps culture (integrating development and operations).

Therefore, when building software, attention must be paid to decomposing application functionality following the principles of microservices. Figure 1 shows a comparison of the microservice approach with the monolithic approach, which the sketch below also illustrates.
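To make the comparison in Figure 1 concrete, here is a hedged NodeJS/Express sketch (routes, ports, and module names are hypothetical, and `express` must be installed): in the monolithic version all functions share one process, while in the microservice version each function runs as its own service:

```javascript
const express = require("express");

// Monolithic approach: every function lives in one process,
// so the whole application must scale together.
const monolith = express();
monolith.get("/orders", (req, res) => res.json({ orders: [] }));
monolith.get("/payments", (req, res) => res.json({ payments: [] }));
monolith.listen(3000);

// Microservice approach: each function is its own service,
// so only the overloaded one needs to be replicated.
const orders = express();
orders.get("/orders", (req, res) => res.json({ orders: [] }));
orders.listen(3001);

const payments = express();
payments.get("/payments", (req, res) => res.json({ payments: [] }));
payments.listen(3002);
```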

2. The Function-as-a-Service Programming Model

The function-as-a-service programming model has evolved over time with the goal of optimizing software development and maintenance costs, development capabilities, and alignment with the business. Originally, software was developed in a monolithic way, meaning that the entire application was contained in a single piece of software. This model results in highly coupled application components, reducing maintainability. Today, it is common for system architects to design their applications in a modular manner, separating and grouping related functions into blocks of varying complexity. Microservice architectures have proven to provide good scalability and to improve software maintenance processes, and cloud providers have modeled a service that fits this approach: FaaS. The term microservice has been used in the Agile development community since 2014. Its standards and principles include the use of RESTful APIs and business-oriented, cloud-native development, in addition to the practices listed earlier.

In the monolithic approach, the entire application is placed in a single process, while in the microservice approach each function is placed in a different service. Thus, in a monolithic approach the entire application needs to scale up, whereas with microservices only the services under higher demand are scaled (providing more rational resource consumption). Given that modules are usually not used uniformly, meaning that each module has a different workload, the monolithic model can waste resources: even modules that are rarely used must be instantiated again so that the most overloaded module can grow. In microservice-oriented models, by contrast, only the most used modules are effectively replicated.

To meet the infrastructure needs of microservices-based applications, some cloud providers have begun to offer the function-as-a-service submodel. In it, the client signs up with a provider, uploads the code to be executed, and receives an access address for the service. Applications that use such cloud services have been referred to as serverless applications, because the application does not have a specific server and its operation is based on requests to the provider’s API. The main benefits of this model are automatic, continuous elasticity and pay-per-use billing. Other benefits include increased project flexibility and reduced risk; elasticity stands out because, in critical applications, high availability is an essential feature. In addition, with FaaS, management focuses on applications rather than infrastructure, freeing up resources to improve the business logic.

On the other hand, the FaaS model has characteristics that may not suit applications requiring operating system customization, since the OS cannot be adjusted by the application. In addition, a response delay may occur in the first execution of a function after some idle time; this is due to the provider’s policy of keeping the instance that hosts the function powered up for only a few minutes after the last call. Another potential problem is the limits on CPU, memory, and execution time set by the provider. The latency of processing that must be triggered over the network is another weak point of the model. Finally, developers face a learning curve in adopting FaaS.
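As a sketch of what consuming such a service looks like, the snippet below calls a deployed function through its REST endpoint using NodeJS's built-in fetch (available globally since Node 18); the endpoint URL and payload are placeholders, not a real provider address:

```javascript
// Invoke a serverless function through the access address returned
// by the provider. The endpoint below is a placeholder.
const endpoint = "https://example-faas-provider.com/functions/hello";

async function invoke() {
  // Note: the first call after an idle period may be slower,
  // due to the cold-start behavior described above.
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "cloud" }),
  });
  if (!response.ok) throw new Error(`Call failed: ${response.status}`);
  console.log(await response.json());
}

invoke().catch(console.error);
```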
Correctly understanding how the services work can mean the difference between successfully adopting this approach and abandoning it after a few minor failures. However, reaching this threshold may involve a learning curve steep enough to prevent the adoption of the model. To this end, this article proposes a way to mitigate this difficulty by reducing the gap between developers and FaaS services through automation, transforming the functionality of monolithic applications into services automatically and transparently.

2.2 Public Providers of FaaS

Currently, the major public cloud providers have FaaS services in their catalogs. The pioneer, Amazon, offers AWS Lambda. The service can handle Java, NodeJS, C#, and Python, and offers a free monthly processing quota before charging. The vendor has several partnerships for deployment, monitoring, code management, and security. Google provides the Cloud Functions service (Google, 2018). The service can handle functions written in Python and NodeJS and, like Amazon’s Lambda, provides an initial usage quota and starts charging only when that quota is exceeded. Microsoft supports Azure Functions through its Azure cloud service. The service allows you to load functions in C#, including by using Microsoft’s own tools such as Visual Studio. The provider specifies that functions created through its portal run only in a Windows environment, although this is transparent to the user. Nevertheless, the availability of Linux environments is anticipated.
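To illustrate why per-provider knowledge is needed, compare the NodeJS handler shapes the three providers have commonly documented; these are sketches (each would live in its own project, and details vary by runtime version):

```javascript
// AWS Lambda: an exported async handler receiving an event object.
exports.lambdaHandler = async (event) => {
  return { statusCode: 200, body: "Hello from Lambda" };
};

// Google Cloud Functions (HTTP): an Express-style (req, res) function.
exports.gcfHandler = (req, res) => {
  res.send("Hello from Cloud Functions");
};

// Azure Functions (classic NodeJS model): a function receiving a
// context plus the request, writing its result to context.res.
module.exports = async function azureHandler(context, req) {
  context.res = { status: 200, body: "Hello from Azure Functions" };
};
```

The same piece of business logic must therefore be wrapped differently for each provider, which is exactly the friction the proposal aims to remove.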

As computing evolves, software architectures need to adapt to the needs of the applications in use. There was a time when huge machines took care of all the processing, and software ran mainly on mainframes. As personal computers became popular, the computing power available in these machines could be better used, and software processing gradually shifted to “fat clients”. The client-side programming language that stands out in this context is JavaScript. Originally used for small validations and simple interactions, the language has become a standard, and several frameworks have leveraged it to increase the interactivity of Web applications. The Institute of Electrical and Electronics Engineers (IEEE) released a ranking of the most commonly used programming languages in 2018, with JavaScript occupying the eighth position.

Due to the success of JavaScript, NodeJS was introduced to the world by Ryan Dahl back in 2009. Neither merely a JavaScript framework nor a new programming language in disguise, NodeJS is defined as a JavaScript code execution environment. Its main purpose is to address the high-scalability requirements of today’s Web applications. To do this, it supports asynchronous execution, freeing processes that would otherwise be blocked while waiting for calls to respond. NodeJS brings to the server side the JavaScript language that was consolidated on the client side. With this, it reaches a wide range of potential users without requiring developers to overcome the learning curve that normally comes with a new programming language. In addition, it has strong community support, which provides a large collection of libraries through its own package manager, NPM (Node Package Manager).
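The non-blocking style described above can be sketched with NodeJS's standard fs module; the file name is a placeholder:

```javascript
const fs = require("fs");

// Asynchronous, non-blocking read: NodeJS hands the I/O off to the
// event loop and keeps executing, instead of blocking the process
// while waiting for the result.
fs.readFile("./data.txt", "utf8", (err, contents) => {
  if (err) return console.error("read failed:", err.message);
  console.log("file ready:", contents.length, "characters");
});

// This line runs immediately, before the file read completes.
console.log("doing other work while the read is in flight...");
```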

Given the popularity of NodeJS, several systems have been built with this technology, which is why it was chosen as the target of the proposed framework. To use the available FaaS services, developers must understand the API details of each vendor, as well as be able to subdivide the functionality of the application and translate it into the appropriate calls to the service. This can become cumbersome and discourage developers from adopting FaaS-based approaches. Many professionals may forgo the benefits this architecture can offer, such as high availability, elasticity, and cost savings. In view of this situation, this work proposes the Node2FaaS framework, an automatic converter of monolithic applications written in NodeJS into FaaS-oriented applications. The tool reads the target application’s source code and converts each of its functions into calls to a FaaS service.
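The general idea of such a conversion can be sketched as follows; this is an illustration of the concept, not Node2FaaS’s actual output, and the endpoint is hypothetical. A local function is replaced by a wrapper with the same interface that delegates the work to a FaaS service, so callers do not need to change:

```javascript
// Before conversion: an ordinary local function in the monolith.
function add(a, b) {
  return a + b;
}

// After conversion: a wrapper with the same interface that forwards
// the call to a FaaS endpoint (placeholder URL and response shape).
async function addAsService(a, b) {
  const response = await fetch("https://example-provider.com/functions/add", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ a, b }),
  });
  const { result } = await response.json();
  return result;
}
```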