FAQs
About Above Computing
Are you a database company?
No. Above is a service that runs on top of mature SQL, NoSQL and object storage technologies across cloud providers. It's a no-code user experience layer that brings physical distributed servers together with virtual data modeling and logic technology. The result is a global, resilient data processing network that enables virtually anyone to be a high-end "database architect" without having to mess with, or manage, databases. It also gives users the ability to build powerful logic and analytics without coding.
Because we use proven technologies, Above is safe, reliable and predictable. Also, since we don't have to develop a database or other low-level technologies, all of our time is focused on bringing new features and better experiences to our users.
How does the logic work? Can I use my own code?
Above's no-code logic is based in large part on control-flow principles. However, instead of coded statements that have no knowledge of the data being worked on, our approach uses data-aware, independent agents to execute logic.
Agents are composed of commands and actions that can be clicked together. Agents can call each other to perform tasks and work in groups to accomplish objectives across the data fabric. They run in real time based on events and data changes, or they can operate on a timer to do periodic processing or assemble reports. They can also be called from the API to execute their mission(s). When agents are combined with Above's data modeling and processing options, developers can move beyond CRUD and workflow to run performant real-time analytics across massive datasets.
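For developers who want a concrete picture, here is a minimal sketch of what triggering an agent over an HTTP API could look like. The endpoint URL, route, agent name and payload shape below are illustrative assumptions, not Above's published API.

import requests

# Hypothetical example: the URL, route and response shape are assumptions.
def run_agent(agent_name, payload, api_key):
    # Ask the platform to execute the named agent with the given input.
    resp = requests.post(
        f"https://api.example.com/v1/agents/{agent_name}/run",
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # the agent's result, or a reference to a running job

# Example: kick off a (hypothetical) reporting agent on demand.
# run_agent("daily-sales-rollup", {"region": "EU"}, api_key="...")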
While we are obviously big believers in the convenience and power of this approach, we do plan to support sandboxed execution of JavaScript and other scripts/languages as we grow, and to let you call AWS Lambdas or Cloudflare Workers as needed.
How do I avoid lock-in?
We run in many of the cloud providers and locations where you may already have infrastructure. You can extract data from Above into your database of choice via the API.
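As an illustration, a simple export script could pull records through the API and load them into a local database you control. The export endpoint and JSON shape shown here are assumptions made for the sketch, not a documented interface.

import sqlite3
import requests

# Hypothetical example: the export endpoint and response shape are assumptions.
def export_to_sqlite(dataset, api_key, db_path="export.db"):
    rows = requests.get(
        f"https://api.example.com/v1/datasets/{dataset}/export",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=60,
    ).json()
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS export (id TEXT, body TEXT)")
    con.executemany(
        "INSERT INTO export VALUES (?, ?)",
        [(str(r.get("id")), str(r)) for r in rows],
    )
    con.commit()
    con.close()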
Within Above, you'll know which cloud provider and location your data is stored in, so in many cases there are no bandwidth egress charges from your provider.
We anticipate adding robust data migration services via vendor partners as we grow.
What are parallel and in-memory processing?
Parallel and in-memory processing are two techniques for handling high-load and compute-intensive situations.
Parallel processing is important for manipulating large data sets and getting a quick response. The approach is to break a data set into pieces and process those pieces simultaneously, so you can achieve a CRUD result faster or transform a data set into higher-level analytics without waiting hours or a day.
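To make the split-and-combine pattern concrete, here is a minimal Python sketch. Above handles this for you without code, so the example is purely illustrative; the data and field names are made up.

from multiprocessing import Pool

def chunk_total(chunk):
    # Reduce one piece of the data set to a partial result.
    return sum(row["amount"] for row in chunk)

def parallel_total(rows, workers=4):
    # Break the data set into pieces and process them at the same time.
    size = max(1, len(rows) // workers)
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    with Pool(workers) as pool:
        partials = pool.map(chunk_total, chunks)
    # Combine the partial results into the final analytic value.
    return sum(partials)

if __name__ == "__main__":
    data = [{"amount": i} for i in range(1_000_000)]
    print(parallel_total(data))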
In-memory processing, on the other hand, is extremely useful for high-throughput operations and calculations, where speed is essential or there is a high volume of simultaneous operations across many large, disparate data sets. By keeping data in memory, you can do calculations in nanosecond timeframes while avoiding the read/write locking and performance issues of a physical database.
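The in-memory idea can be illustrated the same way: load the working set once, then answer questions straight from memory instead of issuing repeated database reads. Again, this is a generic Python sketch with made-up names, not Above's implementation.

# Hold the hot data set entirely in memory (a plain dict here).
latest_price = {}  # symbol -> most recent price

def update_price(symbol, price):
    # No disk write and no row locking; just an in-memory update.
    latest_price[symbol] = price

def portfolio_value(holdings):
    # holdings: {symbol: quantity}; computed straight from memory.
    return sum(qty * latest_price[sym] for sym, qty in holdings.items())

update_price("ACME", 12.5)
update_price("GLOBEX", 4.2)
print(portfolio_value({"ACME": 10, "GLOBEX": 100}))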
With Above's approach, you can architect standard, edge, parallel and in-memory processing in the right ratios for your use case, goals and budget.
See it in action
Start prototyping something new today. No tech skills required!