Apache Spark Scala UDF

I have created the Spark Scala UDF below to check for blank columns, and I tested it with a sample table.
Please find the transformation code below.
[Screenshots: sample table before applying the Spark UDF, and after applying the Spark UDF in the Hive table]
OK, let's see the code that does this work for us.

If we want to write a UDF in Scala while using Spark, we need to import the following:
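A minimal sketch of the imports, assuming Spark 2.x or later with the DataFrame API (the post's exact import list is not shown here):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf
```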

Once that is executed successfully, I have written a function that takes a value as an argument and checks whether it is blank or not; if it is blank, it substitutes the value "NULL".
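A sketch of what that function and its UDF registration could look like; the names checkBlank and blankToNullUdf are illustrative, not necessarily the original post's identifiers:

```scala
// Returns the literal string "NULL" when the input is null or blank,
// otherwise passes the value through unchanged.
val checkBlank: String => String = value =>
  if (value == null || value.trim.isEmpty) "NULL" else value

// Wrap the Scala function as a Spark UDF so it can be used on columns.
val blankToNullUdf = udf(checkBlank)
```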
While working with Hive, the data import into Hive produced a lot of blanks, which I wanted to have replaced with NULL.
For each column in the Hive table, I applied the UDF with something like .withColumn("columnname", udfname(col("columnname"))), as sketched below.
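Putting it together, applying the UDF across every column of a Hive-backed DataFrame could look like this; the table name employees and the session setup are assumptions for illustration only:

```scala
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder()
  .appName("BlankToNull")
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical Hive table name, used only for illustration.
val df = spark.table("employees")

// Apply the UDF to each column, replacing blanks with the literal "NULL".
val cleaned = df.columns.foldLeft(df) { (acc, name) =>
  acc.withColumn(name, blankToNullUdf(col(name)))
}

cleaned.show()
```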

To me, it is very simple and easy to use a UDF written in Scala for Spark on the fly.

Let me write more UDFs and share them on this website. Keep visiting www.JavaChain.com.

Thanks

Anto