If there is one thing I learned when becoming a data engineer, it’s that Hadoop expertise alone is probably not enough. For starters: what it means to be a data engineer is not exactly sharply defined. Some say data engineers are (Java) developers. Some place data engineers more on the operations side. And at some organisations data engineers work with any combination of these products: Hadoop, ElasticSearch, MongoDB, Cassandra, relational databases and even less hip products.
So I thought it would be a good idea to broaden my horizons. One product that is used quite often is MongoDB, a NoSQL database. And if you don’t exactly know what that means, I think you will get the idea after watching this video I made.
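To give the idea a little code as well: MongoDB stores JSON-like documents instead of rows, and documents in the same collection don’t all need the same fields. Here is a minimal Python sketch of that document model (the movie data and field names are made up for illustration; with a real MongoDB server you would do the same thing through pymongo’s `insert_many` and `find`):

```python
# A MongoDB collection holds JSON-like documents; unlike rows in a
# relational table, documents need not share the same schema.
movies = []  # stand-in for a collection, e.g. db.movies in pymongo

# Two documents with different fields -- perfectly fine in a document store.
movies.append({"title": "Alien", "year": 1979, "cast": ["Sigourney Weaver"]})
movies.append({"title": "Solaris", "year": 1972})  # no "cast" field at all

# In MongoDB this query would be db.movies.find({"year": {"$lt": 1975}});
# here it is simulated with a list comprehension.
older = [m for m in movies if m["year"] < 1975]
print([m["title"] for m in older])
```

The point is the flexibility: adding a field to one document doesn’t require changing anything about the others.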
I tried Lion’s Mane from Four Sigmatic, which is branded as a cognitive enhancer. I’ve used it while studying Deep Neural Networks, amongst other things. I’ve done alternate weeks with and without Lion’s Mane, and in my experience the effect is indiscernible.
Why a cognitive enhancer?
I often listen to Tim Ferriss’ podcast (The Tim Ferriss Show). In it he often advertises the wares of a company called Four Sigmatic. Apparently some of their mushroom coffees enhance cognitive abilities. That is of interest to me, because I’ve been studying a data science course on Coursera.org which had quite a lot of math, and later I got a new assignment as a consultant to dive rather deep into the (Hadoop/Big Data related) Apache Atlas and Ranger products.
I’m 47 years old and math is certainly not part of my daily life. In fact I haven’t seen much math since my bachelor study twenty years ago (besides Coursera courses). I’m also learning a lot of new open source products as a data engineer. I can use all the cognitive abilities I can get. Continue reading
If you’ve worked with the Hortonworks Data Platform 2.x sandbox or later versions in VirtualBox and shut it down rather vigorously, you might have noticed that you won’t get past this startup screen the next time you try to start it up:
I had this happen a couple of times, which is why I decided to pause my sandbox every time and save it before shutting down my laptop. But yesterday Windows 10 decided to step in. After a day of studying it was high time for me to have dinner, during which I kept the laptop on. Little did I know that Windows 10 decided at that moment to update and restart. And to do this, it needed to shut down every application, including VirtualBox. When I came back I found out to my horror that my carefully prepared HDP sandbox had been shut down in the roughest of ways. Thanks, Microsoft! Continue reading
Posted in Apache Products for Outsiders, Howto, Learning Big Data
Tagged Ambari, DataNode, HDFS, HDP Sandbox, Hive, Hortonworks, Horty, NameNode, Spark, VirtualBox
This is a tutorial on how to import fixed-length data into Apache Hive (on Hortonworks Data Platform 2.6.1). The idea is that any non-Hive, non-Hadoop savvy people can follow along, so let me know if I succeeded (make sure you don’t look like comment spam though; I’m getting a lot of that lately, even though it never passes my approval).
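Whatever tool you use, the core idea behind fixed-length data is that every field lives at a known character position in each line. A small Python sketch of that idea (the column offsets here are made up for illustration, not the real layout of any particular file):

```python
# Each record of a fixed-length file has fields at fixed character
# positions. These offsets are hypothetical, purely for illustration.
LAYOUT = {"designation": (0, 7), "magnitude": (8, 13), "name": (14, 30)}

def parse_line(line):
    """Slice one fixed-width record into a dict of stripped field values."""
    return {field: line[start:end].strip()
            for field, (start, end) in LAYOUT.items()}

record = parse_line("00001   3.34  Ceres           ")
print(record)
```

In Hive you typically express the same slicing with `substr()` (which is 1-based) over a staging table that holds each raw line as a single string column, e.g. `substr(line, 1, 7)` for the first field.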
Currently I’m studying for the Hortonworks Data Platform Certified Developer: Spark using Python exam (or HDPCD: Spark using Python). One part of the exam objectives is using SQL in Spark. Along the way you also work with Hive, the data warehouse software in Hadoop.
I was following the free Udemy HDPCD Spark using Python preparation course by ITVersity. The course is good BTW, especially for the price :). But after playing along with the Core Spark videos, the course again used the same boring revenue data for the Spark SQL part. And I thought: “I know SQL pretty well. Why not use data that is a bit more interesting?” And so I downloaded the Minor Planet Center’s asteroid data. This contains all the known asteroids up until at least yesterday. At this moment, that is about 745,000 lines of data. Continue reading
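Spark SQL queries read much like ordinary SQL, which is exactly why more interesting data pays off. As a runnable stand-in (deliberately using Python’s built-in sqlite3 instead of Spark, and a handful of sample asteroid rows instead of the full MPC file), a query in the same spirit:

```python
import sqlite3

# A tiny sample asteroid table. In Spark you would register a DataFrame
# as a temp view and run the same kind of SQL via spark.sql(...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE asteroids (name TEXT, abs_magnitude REAL)")
conn.executemany(
    "INSERT INTO asteroids VALUES (?, ?)",
    [("Ceres", 3.34), ("Pallas", 4.13), ("Vesta", 3.20), ("Hygiea", 5.43)],
)

# Brightest asteroids first (lower absolute magnitude = brighter).
rows = conn.execute(
    "SELECT name FROM asteroids WHERE abs_magnitude < 5 "
    "ORDER BY abs_magnitude"
).fetchall()
print([name for (name,) in rows])
```

The SQL itself would transfer to Spark almost unchanged; only the plumbing around it differs.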
Last week I had a little fun playing with Python, the pandas and matplotlib libraries, and a JSON file with asteroid data. Here is what I did.
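The shape of that exercise looks roughly like this (the field names and values below are made up for illustration; the real asteroid JSON has its own keys, and normally you would load the file itself with `pd.read_json`):

```python
import json

import pandas as pd

# A few sample records standing in for the asteroid JSON file.
raw = json.loads("""[
  {"name": "Ceres",  "semimajor_axis": 2.77, "abs_magnitude": 3.34},
  {"name": "Pallas", "semimajor_axis": 2.77, "abs_magnitude": 4.13},
  {"name": "Vesta",  "semimajor_axis": 2.36, "abs_magnitude": 3.20}
]""")
df = pd.DataFrame(raw)

print(df["semimajor_axis"].mean())
# With matplotlib installed, a scatter plot is then one line:
# df.plot.scatter(x="semimajor_axis", y="abs_magnitude")
```

Once the JSON is in a DataFrame, the pandas/matplotlib combination does most of the heavy lifting.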
If you don’t know a lot about YARN and why it’s called a data operating system, you’re in luck. I found it necessary to explain how YARN works before I could explain the solutions for high availability.
At first YARN High Availability seemed like a different beast from HDFS High Availability. But when I read more about the topic I found out the solutions are actually very similar. Enjoy!
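For reference alongside the video: ResourceManager HA boils down to a handful of yarn-site.xml properties along these lines (the host names are placeholders, and a complete setup needs a few more settings, such as `yarn.resourcemanager.cluster-id`):

```xml
<!-- yarn-site.xml: ResourceManager HA sketch; host names are placeholders -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2.example.com</value>
</property>
<property>
  <!-- ZooKeeper handles leader election between the ResourceManagers -->
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```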
I’ve been studying for a couple of hours how Hadoop high availability works, for the HDPCA exam. And now I’ve condensed that knowledge to a video on HDFS HA in just under 9 minutes. Enjoy!
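To go with the video, here is roughly what HDFS HA looks like in hdfs-site.xml (a sketch with placeholder names; a full setup also needs RPC/HTTP addresses per NameNode and a fencing method):

```xml
<!-- hdfs-site.xml: NameNode HA sketch; nameservice and hosts are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <!-- The JournalNodes through which both NameNodes share the edits -->
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<property>
  <!-- Lets the ZKFC fail over automatically via ZooKeeper -->
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```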
Posted in Apache Products for Outsiders, Learning Big Data
Tagged DataNode, edits file, Fencing, fsimage, Hadoop, HDFS, High availability, JournalNode, NameNode, Split brain, ZKFC, ZooKeeper
Let’s talk about certification. The thing by which you try to show potential employers and customers that you actually know what you are doing at work. My only experience up to last Tuesday with IT product-related certifications was with Oracle’s Certified Professional program. I’ve been OCP for the database from 8i to 11g, plus I’m an 11g Database Performance Tuning Certified Expert. But all these exams were mainly multiple choice, and to really test your knowledge the exams often contained some obscure stuff that you would rarely use. I’ll never forget the question about v$waitstat in one of these exams… well, I digress.
OCP wasn’t exactly embraced by all Oracle DBAs either. A lot of experienced DBAs saw it more as a way for inexperienced DBAs to show they… knew how to learn lots of facts about Oracle databases. Companies with lots of inexperienced DBAs loved it, hoping that this would entice customers to invite their otherwise green “medior” DBAs.
This is part 2 in a series on how to build a Hortonworks Data Platform 2.6 cluster on AWS. In part 1 we created an edge node where we will later install Ambari Server. The next step is creating the master nodes.
Creating the first master node
Make sure you are logged in to Amazon Web Services, in the same AWS region as the edge node. To create 3 master nodes, we have to start with one. Once again we go to the EC2 dashboard in the AWS interface and click “Launch instance”. And again we have a choice of Amazon Machine Images, and again we choose Ubuntu Server 16.04.