This is part 3 in a series on how to build a Hortonworks Data Platform 2.6 cluster on AWS. By now we have an edge node to run Ambari Server and three master nodes for the Hadoop NameNodes and such. Now we need worker nodes for processing the data.
Creating the worker nodes is not that much different from creating the master nodes, but the workers need more powerful instances.
Creating the first worker node
Log in at Amazon Web Services again, in the same AWS region as the edge and master nodes. We start with one worker node and clone two more later on. Go to the EC2 dashboard in the AWS console and click “Launch instance”. Then choose Ubuntu Server 16.04 from the Amazon Machine Images.
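If you prefer scripting the launch over clicking through the console, here is a minimal sketch using boto3. The AMI ID, key pair name, security group and instance type below are all assumptions on my part; substitute your own values.

```python
import boto3

# Minimal sketch: launch one worker node with boto3 (pip install boto3).
# All IDs and names below are placeholders -- substitute your own
# Ubuntu Server 16.04 AMI, key pair and security group.
ec2 = boto3.client("ec2", region_name="eu-west-1")  # same region as the other nodes

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # hypothetical Ubuntu 16.04 AMI ID
    InstanceType="m4.xlarge",          # assumption: workers get beefier instances
    KeyName="my-hdp-keypair",          # hypothetical key pair name
    SecurityGroupIds=["sg-xxxxxxxx"],  # hypothetical security group
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "hdp-worker-1"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```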
If there is one thing I learned when becoming a data engineer, it’s that having Hadoop expertise alone is probably not enough. For starters, what it means to be a data engineer is not exactly sharply defined. Some say data engineers are (Java) developers. Some place data engineers more on the operations side. And at some organisations data engineers work with any combination of these products: Hadoop, ElasticSearch, MongoDB, Cassandra, relational databases and even less hip products.
So I thought it would be a good idea to broaden my horizons. One product that is used quite often is MongoDB. MongoDB is a NoSQL database, and if you don’t exactly know what that means, I think you will get the idea after watching this video I made.
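To give a first taste of what “NoSQL” means in practice, here is a minimal sketch with pymongo. It assumes a MongoDB running locally on the default port, and the database, collection and field names are made up for illustration. The point: documents are schemaless JSON-like objects rather than rows in a fixed table.

```python
from pymongo import MongoClient

# Minimal sketch, assuming a local MongoDB on the default port 27017.
client = MongoClient("mongodb://localhost:27017/")
db = client["demo"]  # hypothetical database name

# No CREATE TABLE needed: documents in one collection can have different shapes.
db.engineers.insert_one({"name": "Ada", "skills": ["Hadoop", "MongoDB"]})
db.engineers.insert_one({"name": "Bob", "skills": ["Cassandra"], "remote": True})

# Query by a field inside the documents.
for doc in db.engineers.find({"skills": "MongoDB"}):
    print(doc["name"])
```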
I tried Lion’s Mane from Four Sigmatic, which is branded as a cognitive enhancer. I’ve used it while studying Deep Neural Networks, amongst other things. I alternated weeks with and without Lion’s Mane, and in my experience the effect is indiscernible.
Why a cognitive enhancer?
I often listen to Tim Ferriss’ podcast (The Tim Ferriss Show). In it he often advertises the wares of a company called Four Sigmatic. Apparently some of their mushroom coffees enhance cognitive abilities. That is of interest to me, because I’ve been studying a data science course on Coursera.org which had quite a lot of math, and later I got a new assignment as a consultant to dive rather deep into the (Hadoop/Big Data related) Apache Atlas and Ranger products.
I’m 47 years old and math is certainly not part of my daily life. In fact I haven’t seen much math since my bachelor studies twenty years ago (besides Coursera courses). I’m also learning a lot of new open source products as a data engineer. I can use all the cognitive abilities I can get.
If you’ve worked with the Hortonworks Data Platform 2.x sandbox or later versions in VirtualBox and shut it down rather vigorously, you might have noticed that you won’t get past this startup screen when you try to start it up the next time:
I had this happen a couple of times, and that’s why I decided to pause my sandbox every time and save its state before shutting down my laptop. But yesterday Windows 10 decided to step in. After a day of studying it was high time for me to have dinner, during which I kept the laptop on. Little did I know that Windows 10 decided at that moment to update and restart. And to do this, it needed to shut down every application. Including VirtualBox. When I came back I found out to my horror that my carefully prepared HDP sandbox had been shut down in the roughest of ways. Thanks, Microsoft!
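Saving the machine state can also be scripted, which makes it harder to forget. A minimal sketch driving VirtualBox’s VBoxManage CLI from Python; the VM name is an assumption, so check yours with “VBoxManage list vms” first:

```python
import subprocess

# Save the sandbox's state (like pausing) instead of powering it off.
# "Hortonworks Sandbox" is an assumed VM name -- run `VBoxManage list vms`
# to find the exact name of your virtual machine.
subprocess.run(
    ["VBoxManage", "controlvm", "Hortonworks Sandbox", "savestate"],
    check=True,
)
```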
This is a tutorial on how to import data (with fixed length) in Apache Hive (in Hortonworks Data Platform 2.6.1). The idea is that any non-Hive, non-Hadoop savvy people can follow along, so let me know if I succeeded (make sure you don’t look like comment spam, though; I’m getting a lot of that lately, even though it never passes my approval).
Currently I’m studying for the Hortonworks Data Platform Certified Developer: Spark using Python exam (or HDPCD: Spark using Python). One part of the exam objectives is using SQL in Spark. Along the way you also work with Hive, the data warehouse software in Hadoop.
I was following the free Udemy HDPCD Spark using Python preparation course by ITVersity. The course is good BTW, especially for the price :). But after playing along with the Core Spark videos, the course again used the same boring revenue data for the Spark SQL part. And I thought: “I know SQL pretty well. Why not use data that is a bit more interesting?” So I downloaded the Minor Planet Center’s asteroid data, which contains all known asteroids up to at least yesterday. At this moment, that is about 745,000 lines of data.
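For a flavour of what working with that data looks like on the Spark SQL side, here is a minimal PySpark sketch that slices fixed-length lines into columns and queries them with SQL. The file path, column offsets and column names are assumptions for illustration; look up the real offsets in the MPC format description.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("asteroids").enableHiveSupport().getOrCreate()

# Each line of the export is fixed length; the offsets below are made up
# for illustration -- check the real format description before using them.
raw = spark.read.text("/data/asteroids/mpcorb.dat")  # hypothetical path

asteroids = raw.select(
    F.trim(F.substring("value", 1, 7)).alias("designation"),
    F.substring("value", 9, 5).cast("double").alias("abs_magnitude"),
    F.substring("value", 93, 11).cast("double").alias("semi_major_axis"),
)

# Register for SQL; asteroids.write.saveAsTable(...) would persist it in Hive.
asteroids.createOrReplaceTempView("asteroids")
spark.sql("SELECT designation FROM asteroids WHERE semi_major_axis > 5").show()
```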
Last week I had a little fun playing with Python, the pandas and matplotlib libraries, and a JSON file with asteroid data. Here is what I did.
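As a minimal sketch of that kind of exploration (the file name and field names below are assumptions, not the actual export from the post):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the asteroid data; "asteroids.json" and the field names are
# placeholders -- substitute the ones from your own JSON export.
df = pd.read_json("asteroids.json")

# Quick look at the data, then a histogram of one numeric field.
print(df.head())
df["semi_major_axis"].plot.hist(bins=100)
plt.xlabel("semi-major axis (AU)")
plt.ylabel("number of asteroids")
plt.show()
```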
If you don’t know a lot about YARN and why it’s called a data operating system, you’re in luck: I found it necessary to explain how YARN works before I could explain the solutions for high availability.
At first, YARN High Availability seemed like a different beast from HDFS High Availability. But when I read more about the topic, I found out the solutions are actually very similar. Enjoy!
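For reference, here is a minimal sketch of what enabling ResourceManager HA looks like in yarn-site.xml. The hostnames, cluster id and ZooKeeper quorum are placeholders I made up; on HDP, Ambari normally fills these in for you.

```xml
<!-- Minimal sketch of ResourceManager HA in yarn-site.xml.
     Hostnames, cluster id and the ZooKeeper quorum are placeholders. -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>mycluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>master1.example.com:2181,master2.example.com:2181,master3.example.com:2181</value>
</property>
```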