Discover the Alumio architecture & performance!
Designed to maximize automation, flexibility, and responsiveness!
Alumio delivers a high-performance, cloud-native infrastructure that scales both horizontally and vertically and acts as a central hub to govern and orchestrate integrated systems, data, and processes. It processes thousands of transactions per second and supports thousands of hosted cloud-native Alumio environments.
Data packages (‘in-process data’) are temporarily stored in our robust queuing system; depending on the type of transformation and the chosen Alumio package, this storage is backed by MySQL, Elasticsearch, Apache Spark, Google Cloud Platform (GCP), or Amazon Redshift.
These queues guarantee processing at scale for every individual page of data in transit. If any system goes offline, this architecture allows flow-processing activities to be paused and resumed elegantly without loss of data.
Alumio is built as a high-performance integration platform that connects external applications and handles big data. Data is transformed into smaller packages called ‘Alumio tasks’ that flow through the system in a scalable manner into externally connected applications via our API, supported by our robust queuing mechanism.
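The chunking idea above can be sketched as follows. This is an illustrative example, not the actual Alumio implementation: the `Task` class, `split_into_tasks` helper, and page size are all assumptions made for the sake of the sketch.

```python
# Hypothetical sketch: a large payload is split into smaller page-sized
# "tasks" that can each flow through a queue independently.
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Task:
    task_id: int
    records: list = field(default_factory=list)  # one small page of the payload

def split_into_tasks(records, page_size=100):
    """Split a large record set into page-sized tasks."""
    return [
        Task(task_id=i, records=records[start:start + page_size])
        for i, start in enumerate(range(0, len(records), page_size))
    ]

queue = Queue()
payload = [{"sku": n} for n in range(250)]  # e.g. a product export
for task in split_into_tasks(payload):
    queue.put(task)  # each page is queued and processed independently

print(queue.qsize())  # 250 records at 100 per page -> 3 tasks
```

Because each page is an independent unit of work, a failure or pause affects only the page in flight, not the whole payload.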
Data sent to the Alumio listener APIs is acknowledged only after it has been temporarily persisted to redundant data storage and successfully queued. This protocol lets external applications be certain either that their data will be processed by a flow or that it needs to be resent.
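A minimal sketch of this persist-before-acknowledge contract, assuming an in-memory dictionary as a stand-in for redundant storage. The function and response shape are illustrative, not Alumio's actual listener API.

```python
# Illustrative sketch of "acknowledge only after persisted AND queued":
# the caller gets a positive ack only when both steps have succeeded,
# so a negative ack always means "resend".
import json
from queue import Queue, Full

storage = {}                    # stand-in for redundant, durable storage
task_queue = Queue(maxsize=1000)

def handle_incoming(payload_id: str, payload: dict) -> dict:
    try:
        storage[payload_id] = json.dumps(payload)  # 1. persist durably
        task_queue.put_nowait(payload_id)          # 2. queue for a flow
    except Full:
        storage.pop(payload_id, None)              # roll back the persist
        return {"status": 503, "ack": False}       # caller must resend
    return {"status": 202, "ack": True}            # data will be processed

print(handle_incoming("order-1", {"total": 42}))   # {'status': 202, 'ack': True}
```

The key design point is that the acknowledgment is the last step: an external application never receives a success response for data that could still be lost.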
Alumio can recognize expired or invalid API credentials and automatically take connection resources offline. When a connection goes offline, Alumio's monitoring recognizes failed tasks. Additional workflows can be created to pause all related integration flows that are in progress. New flows will then not be scheduled, and the offline connection is placed into an automated recovery procedure. Once the connection comes back online, all related integration flows resume processing where they left off, and the flows that did not run are scheduled.
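The pause-and-resume behaviour described above can be sketched with a cursor that survives the pause. This is a hedged, simplified model: the `Connection` and `Flow` classes and their fields are assumptions, not Alumio's internal data model.

```python
# Simplified model of pausing a flow when its connection goes offline and
# resuming from the same position once the connection recovers.
class Connection:
    def __init__(self):
        self.online = True

class Flow:
    def __init__(self, connection, total_pages):
        self.connection = connection
        self.total_pages = total_pages
        self.cursor = 0          # last processed page, preserved on pause

    def run(self):
        while self.cursor < self.total_pages:
            if not self.connection.online:
                return "paused"  # nothing is lost; the cursor is kept
            self.cursor += 1     # process one page of data
        return "done"

conn = Connection()
flow = Flow(conn, total_pages=5)
flow.cursor = 2
conn.online = False              # e.g. expired API credentials detected
assert flow.run() == "paused" and flow.cursor == 2
conn.online = True               # automated recovery brings it back
assert flow.run() == "done" and flow.cursor == 5
```

Because progress is tracked per page rather than per payload, resuming means continuing from the next unprocessed page, not reprocessing everything.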
Alumio has no practical limits within an SMB Alumio private cloud account regarding:
Number of applications that can be connected.
Number of flows that can be defined.
Number of flows that can run in parallel.
Number of records that can be processed.
The size of data that can be processed.
Alumio has a full DevOps team monitoring the Alumio platform 24/7. The team is distributed across multiple locations, and each member is fully equipped to work remotely or from an Alumio office.
The Alumio core team has defined a software development process to ensure that Alumio maintains scalability, reliability, and high availability. This Software Development Lifecycle (SDLC) is followed for each Alumio software component project. Each project includes a detailed plan describing how to develop, maintain, replace, alter, or enhance the software. This methodology ensures the quality of the Alumio iPaaS.
Alumio develops and improves its applications by using sound software-development lifecycle (SDLC) practices such as:
Identifying vulnerabilities from outside sources to drive change and code improvement.
Applying hardware and software patches wherein Alumio is responsible for code changes and Amazon Web Services (AWS) is responsible for providing hardware patches; our virtual environment allows us to apply changes without any downtime.
Providing secure authentication and logging capabilities.
Removing development accounts, IDs, and passwords from production environments.
Adhering to strict change management practices for code updates as well as patches.
Separating test and development environments from production.
Maintaining separation of duties between development and support staff.
Ensuring Personal Identifiable Information (PII) is not used in test environments.
Performing regular code reviews and documenting code changes.
Engaging senior developer input and approval for all code changes.
Completing functionality and regression testing before release to production.
Conducting performance tests for every code component.
Maintaining backout procedures to preserve high availability and integrity.
Following secure coding practices according to an SDLC policy and addressing the security training needs of the development team.
Referring to the Open Web Application Security Project (OWASP) to check for security flaws such as injection flaws, buffer overflows, cryptographic errors, error handling, etc.
Assessing for vulnerabilities on every release.
Conducting annual penetration testing to identify points of improvement.
Key principles behind the Alumio architecture:
Lifecycle API management.
A Symfony-based iPaaS.
Don't reinvent the wheel.
Use each technology to its strengths.
Implementing a hexagonal (ports-and-adapters) design.
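A minimal sketch of the hexagonal (ports-and-adapters) idea the last principle refers to: core logic depends only on an abstract port, and each connected system plugs in through its own adapter. The names here are illustrative assumptions, not Alumio's actual classes.

```python
# Ports-and-adapters in miniature: the core flow knows only the abstract
# port; concrete systems (databases, APIs, queues) are swappable adapters.
from abc import ABC, abstractmethod

class OutboundPort(ABC):
    """Port: what the core needs, independent of any concrete system."""
    @abstractmethod
    def send(self, record: dict) -> None: ...

class InMemoryAdapter(OutboundPort):
    """Adapter: one concrete implementation of the port (here, a list)."""
    def __init__(self):
        self.sent = []
    def send(self, record):
        self.sent.append(record)

def run_flow(records, port: OutboundPort):
    """Core logic: depends on the port, never on a specific adapter."""
    for record in records:
        port.send(record)

adapter = InMemoryAdapter()
run_flow([{"id": 1}, {"id": 2}], adapter)
print(len(adapter.sent))  # 2
```

This separation is what makes it possible to connect a new external application by writing only a new adapter, leaving the core flow logic untouched.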