Scala

Turn your lakes of data into rivers of value with Scala. Build powerful, scalable parallel applications with the successor to Java.

Do you need to process huge volumes of data? Build applications that easily scale to any load and keep running smoothly for years to come? Scala is the answer: smart, concise code that is fully compatible with Java.

Scala saves you development costs

Scala means faster development with fewer lines written (e.g., 50% fewer than Java). Scala apps are cheap to scale for increased performance. Concise, readable code leads to higher productivity and faster testing. Functional programming makes code easier to debug and more robust. Scala also offers high-level abstractions that let you focus on business logic and on generating business value.
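As a small, REPL-style illustration of this conciseness: a single Scala case class replaces the constructor, getters, equals, hashCode and toString that a comparable Java class would need, and functional collection operations handle the aggregation in a few lines (the Transaction class and sample data below are hypothetical):

  // One line of Scala: an immutable value type with equals, hashCode,
  // toString, copy and pattern-matching support generated for free.
  case class Transaction(id: Long, account: String, amount: BigDecimal)

  val transactions = List(
    Transaction(1L, "ACC-1", BigDecimal(100)),
    Transaction(2L, "ACC-1", BigDecimal(-40)),
    Transaction(3L, "ACC-2", BigDecimal(250))
  )

  // Concise, functional aggregation: balance per account in three lines.
  val balanceByAccount: Map[String, BigDecimal] =
    transactions
      .groupBy(_.account)
      .map { case (account, txs) => account -> txs.map(_.amount).sum }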

It is one language to build anything from huge Big Data ETL pipelines to glorious webpages.

Scala is compatible with Java: you can seamlessly use the Java code you already have within a Scala app, so you can start right away without losing any work that has already been done.
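A minimal sketch of this interoperability, calling standard Java classes directly from Scala (REPL-style snippet, assuming Scala 2.13 or later for the collection converters):

  // Java classes are imported and used exactly like Scala classes.
  import java.time.LocalDate
  import java.util.UUID

  val today: LocalDate = LocalDate.now()
  val reportId: String = UUID.randomUUID().toString

  // Java collections convert to Scala collections with one extra import.
  import scala.jdk.CollectionConverters._
  val javaList = new java.util.ArrayList[String]()
  javaList.add(s"report-$reportId-$today")
  val scalaNames: List[String] = javaList.asScala.toList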

50% less code than Java.

Adastra has hands-on experience with Scala

We can offer you an experienced team of 10+ Scala developers with practical experience from large-scale projects in industries ranging from banking to telecommunications. Thanks to our business-oriented approach (code is just a means to an end; it has to generate business value), we use our technical expertise to further your business goals. Our code is clean, maintainable, documented, and above all, tested. Code coverage and performance tests are a must. Of course, we are glad to help you with both the initial development and the subsequent operation. The initial project could be anything from a small-scale app with 2 developers to building a full SDK, a Big Data platform, or a complete ETL pipeline.

New trends that Scala and Adastra can help you with:

  • Spark
  • Big Data, HDFS, Hadoop
  • Real-time streaming
  • Kafka
  • NiFi
  • Akka
  • Parallel distributed applications
  • NoSQL databases, e.g. Cassandra, HBase
  • Data Science
  • Machine Learning
  • Artificial Intelligence
  • Framework development
  • Docker
  • Kubernetes
  • DevOps

Success story highlights:

Banking – transaction store

We managed to build a scalable, high-throughput application on top of a Cassandra database in under 3 months. Extensive use of Scala Futures, modern libraries, and the NoSQL Cassandra database allows for unrivaled speed that can be used for anything from analytics to internet banking. The app easily scales to any volume or velocity of data simply by adding more inexpensive nodes to the cluster.
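A minimal, REPL-style sketch of the Futures-based concurrency pattern behind this kind of throughput; the fetchBalance function is a hypothetical stand-in for a real asynchronous Cassandra query:

  import scala.concurrent.{Await, Future}
  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.duration._

  // Hypothetical stand-in for an asynchronous database read.
  def fetchBalance(account: String): Future[BigDecimal] =
    Future(BigDecimal(100))

  // Independent reads run concurrently and are combined without blocking.
  val accounts = List("ACC-1", "ACC-2", "ACC-3")
  val totalBalance: Future[BigDecimal] =
    Future.traverse(accounts)(fetchBalance).map(_.sum)

  // Blocking here only to print the result in this small demo.
  println(Await.result(totalBalance, 10.seconds))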

Banking – ETL offloading tool

A Scala ingestion tool that can take any input format and store it cheaply and efficiently on a Hadoop platform. It enables mirroring of existing relational databases on Big Data platforms, which allows for extremely fast advanced analytics queries, short training times for machine learning models, and real-time data streaming. The app is highly optimized to run 24/7 and transfers over 4 TB a day in both directions.

Telecommunications – Big Data platform and anonymization framework

We developed both batch and streaming ETL Spark pipelines. Thanks to the development of an anonymization framework in Scala, we were able to use the data for machine learning algorithms. We built the whole solution from scratch, including the Big Data platform itself; its capacity is now 1 PB of storage, 1,400 threads, and 7 TB of RAM. The platform and the Scala ETL framework enable advanced data analytics and machine learning models; for example, prediction of customer behavior is 300% more successful than with previous approaches.

Manufacturing – ETL and compaction pipelines

We have developed ETL pipelines for more than 20 analytical projects. We used advanced Scala data compaction pipelines to boost the efficiency of the Big Data platform and its storage capacity. Once again, the Big Data platform was built by us from scratch and is now used to integrate data from various relational sources and to enable advanced analytics and machine learning on top of them.

Scala and Big Data Synergy

If you are thinking about tackling Big Data, Scala is the way to go for several reasons:

  • Scala is the optimal language for building high-throughput, real-time ETL data pipelines in Spark (see the sketch after this list). Scala also gives you access to the latest Spark features without waiting for its API to be ported to other languages.
  • Scala's functional approach is great for creating applications that run in parallel on each node of your Big Data cluster, optimally utilizing its resources.
  • Applications built in Scala are highly resistant to failures of individual nodes and can continue to process data even if most of your Big Data cluster is down. This way, you can be sure that the application is always running and no data is lost.
  • You can rely on the latest cutting-edge libraries for handling Big Data in Scala, as well as freely reuse any reliable Java code you have already written.
  • Scala with Big Data can get you anything from recommendation engines and machine learning models to highly abstracted, simple-to-use data pipelines.
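As a minimal sketch of such a Spark ETL step written in Scala (the file paths and column names below are hypothetical placeholders, not from a real project):

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions._

  object EtlSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("etl-sketch").getOrCreate()

      // Extract: read raw CSV input from a hypothetical location.
      val raw = spark.read.option("header", "true").csv("/data/in/transactions.csv")

      // Transform: drop incomplete rows and aggregate per account and day.
      val dailyTotals = raw
        .filter(col("amount").isNotNull)
        .groupBy(col("account_id"), col("booking_date"))
        .agg(sum(col("amount")).as("daily_total"))

      // Load: write the result as Parquet for downstream analytics.
      dailyTotals.write.mode("overwrite").parquet("/data/out/daily_totals")

      spark.stop()
    }
  }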

Would you like to use all the benefits of Scala? Contact us.

Tomáš Sedloň

Consultant