Mixing Kotlin and non-Java JVM languages

Hi All,

It looks like I may need to use a number of libraries written in Scala. While this should be fine since I’m just using their compiled jars, it made me curious: what is the recommended way to use both Kotlin and Scala source within a project? My first guess would be to apply the Gradle plugins for each language in a single Gradle build file, though I’m not particularly tied to Gradle.

There’s no reason for me to suspect this wouldn’t work, but the presence of SBT for Scala made me wonder if this is ideal, so I wanted to check in advance.

Thanks,
Brandon


Unfortunately, you cannot mix Scala source and Kotlin source in one module. If you have several modules with dependencies between them, each in its own language, there shouldn't be any problem. Besides, many Scala names and types may not be very well suited to Java (and Kotlin) interoperability.
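For example, a minimal multi-module sketch (using Gradle's Kotlin DSL and the built-in scala plugin; module names and versions here are just placeholders):

    // settings.gradle.kts - one module per language (names are placeholders)
    rootProject.name = "mixed-jvm-demo"
    include("scala-lib", "kotlin-app")

    // scala-lib/build.gradle.kts
    plugins {
        scala
    }
    repositories { mavenCentral() }
    dependencies {
        implementation("org.scala-lang:scala-library:2.13.14") // placeholder version
    }

    // kotlin-app/build.gradle.kts - the Kotlin module consumes the Scala
    // module's compiled classes, so no single module mixes the two languages
    plugins {
        kotlin("jvm") version "1.9.24" // placeholder version
    }
    repositories { mavenCentral() }
    dependencies {
        implementation(project(":scala-lib")) // jar-level dependency on the Scala module
    }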

Do you have some particular Scala libraries in mind you wish to use from Kotlin?


Thanks - this answer was already very helpful for planning purposes.

The big one I plan to use is Apache Spark, but they seem to have designed it with Java and Scala (and Python!) APIs in mind (I have no first-hand experience as yet, though):
https://spark.apache.org/examples.html
However, to me the Java examples do seem much more painful to look at, in general. So it might be worthwhile to work through a few of these in Kotlin and see how it goes.

It seems that Twitter is doing a lot in Scala. I’ll very likely be using their HBC library from Kotlin, but this is actually written in Java (probably because it is for clients and not servers ;)).

As for their Scala libraries, I may use Cassovary and Twitter-Server, though I think my use in the latter case could be rather independent and could be happily accomplished by writing a little Scala code.

I wouldn’t be surprised if I find more but that’s all I know from this morning’s perusal.

I'm sure you would be the first one to actually try and build something with this kind of mix ;) Good luck and please share your experience, we are here to help.

There are several cases where you can't call Scala code from Java, the same will apply to calling Scala from Kotlin.

I've never tried calling Kotlin from Scala, but I reckon that won't be a problem.

Could you please elaborate on specific cases?

Off the top of my head: Scala methods with implicit parameters.

Yes, you can provide the implicit parameter as a normal parameter, but it is a little clunky.
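To make that concrete, here's a sketch. The Scala object is hypothetical, and since scalac compiles an implicit into an ordinary trailing parameter, I'm simulating the compiled shape with a plain Kotlin object so the call site is visible:

    // Hypothetical Scala library code:
    //   object Greeter { def greet(name: String)(implicit punct: String): String = name + punct }
    // At the bytecode level scalac emits just greet(name: String, punct: String);
    // this Kotlin object stands in for that compiled class:
    object Greeter {
        fun greet(name: String, punct: String): String = name + punct // punct was `implicit` in Scala
    }

    fun main() {
        // from Kotlin the implicit is just another positional argument - clunky but workable
        println(Greeter.greet("world", "!")) // prints: world!
    }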

Code that uses Scala collections is also kinda difficult.
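A sketch of the conversion dance involved, assuming a Scala 2.11-era scala-library on the classpath (its scala.collection.JavaConversions object exposes static forwarders that Kotlin can call; both helper names here are mine):

    import scala.collection.JavaConversions

    fun demo(scalaSeq: scala.collection.Seq<Int>) {
        // Scala Seq -> java.util.List, which Kotlin sees as a List
        val asKotlin: List<Int> = JavaConversions.seqAsJavaList(scalaSeq)
        // Kotlin List -> mutable Scala Buffer (a Seq), for APIs wanting scala.collection.Seq
        val asScala: scala.collection.Seq<Int> = JavaConversions.asScalaBuffer(asKotlin)
        println(asKotlin.size == asScala.size())
    }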

I remember a case where I couldn't call Scala Play code from Java Play code, but I don't remember the exact problem (probably implicit parameters).


I started to give it a try.

To build, I just do gradle build, though if you want to skip the time-consuming step of checking and producing the Hadoop jar file, you can add -PskipHadoopJar.

If I run this:

"$JAVA_HOME/bin/jar" tf build/libs/spark-demo-1.0-SNAPSHOT.jar


I’ll see the following, and my understanding is that one of these is probably the Kotlin class I’d like to run:

com/cloudera/sa/SaPackage$SparkPi$e99fb4b3.class
com/cloudera/sa/SaPackage$SparkPiKt$02211b8a.class
com/cloudera/sa/SaPackage.class

But now I show my inexperience with the JVM ... I'm not sure how to run it? I've tried variations of the following:

java -cp build/libs/spark-demo-1.0-SNAPSHOT.jar:spark-demo-1.0-SNAPSHOT-hadoop.jar com.cloudera.sa SaPackage

It is apparent now that I'll probably want to organize things a bit differently than is done in the Scala code; otherwise I'll have main functions clashing in SaPackage.

Tomorrow, after I hopefully figure out the running issue, I will try to actually use Spark a bit, but so far I’m happy I’ve gotten even this far without too much trouble.

Thanks,

Just a quick follow up, I have it running. Adding Cygwin to the mix always increases the fun:

java -cp spark-demo-1.0-SNAPSHOT-hadoop.jar;spark-demo-1.0-SNAPSHOT.jar com.cloudera.sa.SaPackage

More to come soon.

I don't even know enough Scala to know much about modules, but I did find the following which probably agrees with what you are saying:

In Scala, there is a file like this:

package com.cloudera.sa

object SparkPi { ...

Now in Kotlin, in a completely different directory, I have:

package com.cloudera.sa.SparkPi

So these look like different package names, but nonetheless there is a clash in the end, presumably because compileKotlin runs first and scalac then sees a package named SparkPi inside com.cloudera.sa on its classpath, colliding with the Scala object of the same name:

$ ~/gradle-2.4/bin/gradle build
:compileAvro
:compileKotlin
:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.6
1 warning

:compileScala
[ant:scalac] C:\cygwin64\home\brand_000\spark-demo\src\main\scala\com\cloudera\sa\SparkPi.scala:27: error: SparkPi is already defined as package SparkPi
[ant:scalac] object SparkPi {
[ant:scalac]   ^
[ant:scalac] one error found

So I end up using the cheap but, AFAIK, acceptable trick of inserting ‘kotlin’ into the package name hierarchy, and I can now build and run it as described in the Readme.md!:

~/gradle-2.4/bin/gradle runSpark -PsparkMain="com.cloudera.sa.kotlin.SparkPi.SparkPiPackage" -PskipHadoopJar -PsparkArgs="local[2] 100"
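For reference, the SparkPiPackage in that command is the facade class that Kotlin generated at the time for a package's top-level functions (the same mechanism that produced SaPackage earlier). The renamed file header looks roughly like this (a stub sketch; the body is a placeholder):

    // src/main/kotlin/... - sketch of the renamed file
    package com.cloudera.sa.kotlin.SparkPi // extra 'kotlin' segment avoids Scala's object SparkPi

    fun main(args: Array<String>) {
        println("SparkPi goes here") // placeholder body
    }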

I believe I've found some possible snags, which seem to be related to Jorge's prior observations. I can't do import scala.math.random the way the Scala code does, and even a fully qualified call to scala.math.random doesn't resolve from Kotlin.
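One workaround that sidesteps Scala's package object entirely, assuming a uniform [0, 1) value is all that's needed (the helper name is mine):

    // scala.math.random lives in Scala's package object, which is awkward to
    // reach from Kotlin; java.lang.Math.random() has the same contract
    fun randomCoordinate(): Double = Math.random() * 2 - 1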

Another issue: how do we do reflection within main?
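One trick that might work (a sketch, not necessarily the idiomatic answer): a top-level main has no this to reflect on, but an anonymous object can report its enclosing class:

    fun main(args: Array<String>) {
        // no `this` exists in a top-level function; a throwaway anonymous object
        // gives us a Class handle on whatever class the compiler put main() in
        val enclosing = object {}.javaClass.enclosingClass
        println(enclosing.name)
    }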

The most serious trouble relates to numerous issues that occur in the spark.parallelize call I've attempted to bring over, and I believe this is an interop problem with Scala collections:

Here’s the Scala code:

    // Run spark job
    val count = spark.parallelize(1 to n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x*x + y*y < 1) 1 else 0
    }.reduce(_ + _)

Here's my attempt at Kotlin:

    // Run spark job
    val count = spark.parallelize(1 to n, slices).map {
        val x = scala.math.random() * 2 - 1
        val y = scala.math.random() * 2 - 1
        if (x*x + y*y < 1) 1 else 0
    }.reduce{a: Int, b: Int -> a + b}

and here is what happens when I attempt to build and run:

$ ~/gradle-2.4/bin/gradle runSpark -PsparkMain="com.cloudera.sa.kotlin.SparkPi.SparkPiPackage" -PskipHadoopJar -PsparkArgs="local[2] 100"
:compileAvro
:compileKotlin
e: C:\cygwin64\home\brand_000\spark-demo\src\main\kotlin\com.cloudera.sa\SparkPi.kt: (30, 23): Type inference failed: fun <T> parallelize(seq: scala.collection.Seq<T!>!, numSlices: kotlin.Int, `evidence$1`: scala.reflect.ClassTag<T!>!): org.apache.spark.rdd.RDD<T!>!
cannot be applied to
(kotlin.Pair<kotlin.Int, kotlin.Int>,kotlin.Int)

e: C:\cygwin64\home\brand_000\spark-demo\src\main\kotlin\com.cloudera.sa\SparkPi.kt: (30, 37): Type inference failed. Expected type mismatch: found: kotlin.Pair<kotlin.Int, kotlin.Int> required: scala.collection.Seq<(???..???)>!
e: C:\cygwin64\home\brand_000\spark-demo\src\main\kotlin\com.cloudera.sa\SparkPi.kt: (30, 49): No value passed for parameter evidence$1
e: C:\cygwin64\home\brand_000\spark-demo\src\main\kotlin\com.cloudera.sa\SparkPi.kt: (31, 28): Unresolved reference: random
e: C:\cygwin64\home\brand_000\spark-demo\src\main\kotlin\com.cloudera.sa\SparkPi.kt: (32, 28): Unresolved reference: random
e: C:\cygwin64\home\brand_000\spark-demo\src\main\kotlin\com.cloudera.sa\SparkPi.kt: (34, 14): Cannot infer a type for this parameter. Please specify it explicitly.
e: C:\cygwin64\home\brand_000\spark-demo\src\main\kotlin\com.cloudera.sa\SparkPi.kt: (34, 17): Cannot infer a type for this parameter. Please specify it explicitly.
FAILED

FAILURE: Build failed with an exception.

I put this code on a new branch since it doesn't compile.
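Two notes on those errors: in Kotlin, 1 to n builds a kotlin.Pair (to is the pair-constructing infix function), not a range (that would be 1..n), which explains the Pair<Int, Int> messages; and the missing evidence$1 is Scala's implicit ClassTag surfacing as an ordinary parameter. Here's a sketch that sidesteps both by going through Spark's Java API (JavaSparkContext), whose parallelize takes a java.util.List and needs no ClassTag; n and slices are hard-coded placeholders:

    import org.apache.spark.SparkConf
    import org.apache.spark.api.java.JavaSparkContext

    fun main(args: Array<String>) {
        val conf = SparkConf().setAppName("Kotlin Spark Pi").setMaster("local[2]")
        val sc = JavaSparkContext(conf)
        val n = 100000
        val slices = 2
        // JavaSparkContext.parallelize takes a java.util.List, so a Kotlin
        // range converted to a list works directly - no scala.collection.Seq
        val count = sc.parallelize((1..n).toList(), slices).map { _ ->
            val x = Math.random() * 2 - 1
            val y = Math.random() * 2 - 1
            if (x * x + y * y < 1) 1 else 0
        }.reduce { a, b -> a + b }
        println("Pi is roughly ${4.0 * count / n}")
        sc.stop()
    }

The Java-facing API avoids the Scala collections mismatch entirely, at the cost of boxing the ints.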

Any feedback welcome! Also, I’m not dead set on using Kotlin right now. If this isn’t the right time for Kotlin-Scala interop, that’s fine, I understand. I think I can still become a better Kotlin programmer by learning Scala a bit more anyway, though I think the syntax may just be similar enough to make it interesting to switch between the two!