
Loading data from the local filesystem
Though the local filesystem is not a good fit for storing big data, due to disk size limitations and its lack of a distributed nature, you can technically load data into distributed systems from the local filesystem. The file or directory you are accessing, however, has to be available on every node.
Please note that if you are planning to use this feature to load side data, it is not a good idea. To load side data, Spark has the broadcast variable feature, which will be discussed in upcoming chapters.
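As a quick preview, the following is a minimal sketch of the broadcast variable approach (the stop-word set here is an illustrative example, not part of this recipe): the driver ships a read-only copy of the side data to every node once, so the data does not have to sit on each node's local disk:
scala> val stopWords = sc.broadcast(Set("to", "or"))
scala> val kept = sc.textFile("file:///home/hduser/words").flatMap(_.split("\\W+")).filter(w => !stopWords.value.contains(w))
scala> kept.collect.foreach(println)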
In this recipe, we will look at how to load data in Spark from the local filesystem.
How to do it...
Let's start with the example of Shakespeare's "to be or not to be":
- Create the words directory by using the following command:
$ mkdir words
- Get into the words directory:
$ cd words
- Create the sh.txt text file and enter "to be or not to be" in it:
$ echo "to be or not to be" > sh.txt
- Start the Spark shell:
$ spark-shell
- Load the words directory as an RDD:
scala> val words = sc.textFile("file:///home/hduser/words")
- Count the number of lines:
scala> words.count
- Divide the line (or lines) into multiple words:
scala> val wordsFlatMap = words.flatMap(_.split("\\W+"))
- Convert each word to (word,1), that is, output 1 as the value for each occurrence of word as a key:
scala> val wordsMap = wordsFlatMap.map( w => (w,1))
- Use the reduceByKey method to add the number of occurrences of each word as a key (this function works on two consecutive values at a time, represented by a and b; see the sketch after this recipe):
scala> val wordCount = wordsMap.reduceByKey( (a,b) => (a+b))
- Print the RDD:
scala> wordCount.collect.foreach(println)
- Doing all of the preceding operations in one step is as follows:
scala> sc.textFile("file:///home/hduser/words").flatMap(_.split("\\W+")).map( w => (w,1)).reduceByKey( (a,b) => (a+b)).collect.foreach(println)
This gives the following output (the order of the pairs may vary):
(or,1)
(not,1)
(be,2)
(to,2)
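To see what the reduction function in step 8 is doing, you can apply the same (a,b) => (a+b) function to a plain Scala collection (a minimal sketch; the Seq mirrors the two 1s emitted for the key to by the map step):
scala> Seq(1, 1).reduce( (a,b) => (a+b) )
This returns 2, which is exactly the count produced for to.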

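The same recipe also works outside the shell as a standalone application. The following is a minimal sketch, assuming Spark is on the classpath; the WordCount object name and the local[*] master URL are illustrative choices, not part of the recipe:
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Run locally, using as many threads as there are cores (illustrative choice)
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.textFile("file:///home/hduser/words")
      .flatMap(_.split("\\W+"))
      .map(w => (w, 1))
      .reduceByKey((a, b) => a + b)
      .collect
      .foreach(println)
    sc.stop()
  }
}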