Unable to use an existing Hive permanent UDF from Spark SQL
The problem here is that Spark 2.0 cannot execute Hive UDFs whose JARs are located on HDFS.

As a workaround, define the function as a temporary function inside the Spark job, with the JAR path pointing to a local path on the edge node, and then call the function from the same Spark job.
CREATE TEMPORARY FUNCTION functionName AS 'com.test.HiveUDF' USING JAR '/user/home/dir1/functions.jar';
The function cannot be called in a SELECT without a FROM clause. As in Oracle, create a dummy table to select from:
CREATE TABLE dual (dummy STRING);
LOAD DATA LOCAL INPATH '/path/to/textfile/dual.txt' OVERWRITE INTO TABLE dual;
SELECT normaliseURL('value') FROM dual;
Note that the UDF cannot be invoked as if it were a table, so the following form does not work:

SELECT * FROM normaliseURL('value');
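Putting the steps above together, a minimal sketch of the full workaround (the class name com.test.HiveUDF, the JAR path, the dual.txt file, and the normaliseURL function are placeholder names carried over from the examples, not real artifacts):

```sql
-- Register the Hive UDF from a JAR on the local edge node (not HDFS),
-- valid only for the current Spark session
CREATE TEMPORARY FUNCTION normaliseURL AS 'com.test.HiveUDF'
USING JAR '/user/home/dir1/functions.jar';

-- One-row helper table, playing the role of Oracle's DUAL
CREATE TABLE dual (dummy STRING);
LOAD DATA LOCAL INPATH '/path/to/textfile/dual.txt' OVERWRITE INTO TABLE dual;

-- Call the UDF in a SELECT that has a FROM clause
SELECT normaliseURL('value') FROM dual;
```

These statements can be run one after another through spark-sql or via sqlContext.sql(...) inside the same Spark job, since a temporary function does not survive across sessions.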