Python: Create in Memory Zip File

Sometimes we need to create a zip file in memory and then save it to disk, send it to AWS, or wherever else we need it. Below are the basic steps to do this.

from zipfile import ZipFile
from io import BytesIO

# The text data you want to store inside the zip (placeholder content)
file_data = "some text to archive"

in_memory = BytesIO()
zf = ZipFile(in_memory, mode="w")

# If you have data in text format that you want to save into the zip as a file
zf.writestr("name", file_data)

# Close the zip file so the central directory gets written
zf.close()

# Go back to the beginning of the buffer
in_memory.seek(0)

# Read the raw bytes of the finished archive
data = in_memory.read()

# You can save it to disk
with open('file_name.zip', 'wb') as out:
    out.write(data)
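
The intro mentioned sending the archive to AWS. Here is a minimal sketch of uploading the same in-memory bytes to S3 with boto3; the bucket name and key are placeholders, and it assumes you have AWS credentials configured.

import boto3

# "data" is the zip archive bytes read from the in-memory buffer above
s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="file_name.zip", Body=data)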

Hive Kerberos Installation

We are going to install Hive over Hadoop and perform a basic query. Ensure you have already installed Kerberos and Hadoop with Kerberos enabled.

This assumes your hostname is “hadoop”

Download Hive:

wget http://apache.forsale.plus/hive/hive-2.3.3/apache-hive-2.3.3-bin.tar.gz
tar -xzf apache-hive-2.3.3-bin.tar.gz
sudo mv apache-hive-2.3.3-bin /usr/local/hive
sudo chown -R root:hadoopuser /usr/local/hive/

Setup .bashrc:

 sudo nano ~/.bashrc

Add the following to the end of the file.

#HIVE VARIABLES START
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=/usr/local/hive/conf
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:/usr/local/hive/lib/*
#HIVE VARIABLES STOP

 source ~/.bashrc

Create warehouse on hdfs

kinit -kt /etc/security/keytabs/myuser.keytab myuser/hadoop@REALM.CA
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir /tmp
hdfs dfs -chmod g+w /tmp
hdfs dfs -chmod g+w /user/hive/warehouse

Create Kerberos Principals

cd /etc/security/keytabs
sudo kadmin.local
addprinc -randkey hive/hadoop@REALM.CA
addprinc -randkey hivemetastore/hadoop@REALM.CA
addprinc -randkey hive-spnego/hadoop@REALM.CA
xst -kt hive.service.keytab hive/hadoop@REALM.CA
xst -kt hivemetastore.service.keytab hivemetastore/hadoop@REALM.CA
xst -kt hive-spnego.service.keytab hive-spnego/hadoop@REALM.CA
q

Set Keytab Permissions/Ownership

sudo chown root:hadoopuser /etc/security/keytabs/*
sudo chmod 750 /etc/security/keytabs/*

hive-env.sh

cd $HIVE_HOME/conf
sudo cp hive-env.sh.template hive-env.sh

sudo nano /usr/local/hive/conf/hive-env.sh

#locate "HADOOP_HOME" and change to be this
export HADOOP_HOME=/usr/local/hadoop

#locate "HIVE_CONF_DIR" and change to be this
export HIVE_CONF_DIR=/usr/local/hive/conf

hive-site.xml

Check out this link for the configuration properties.

sudo cp /usr/local/hive/conf/hive-default.xml.template /usr/local/hive/conf/hive-site.xml

sudo nano /usr/local/hive/conf/hive-site.xml

#Modify the following properties

<property>
	<name>system:user.name</name>
	<value>${user.name}</value>
</property>
<property>
	<name>javax.jdo.option.ConnectionURL</name>
	<value>jdbc:postgresql://myhost:5432/metastore</value>
</property>
<property>
	<name>javax.jdo.option.ConnectionDriverName</name>
	<value>org.postgresql.Driver</value>
</property>
<property>
	<name>hive.metastore.warehouse.dir</name>
	<value>/user/hive/warehouse</value>
</property>
<property>
	<name>javax.jdo.option.ConnectionUserName</name>
	<value>hiveuser</value>
</property>
<property>
	<name>javax.jdo.option.ConnectionPassword</name>
	<value>PASSWORD</value>
</property>
<property>
	<name>hive.exec.local.scratchdir</name>
	<value>/tmp/${system:user.name}</value>
	<description>Local scratch space for Hive jobs</description>
</property>
<property>
	<name>hive.querylog.location</name>
	<value>/tmp/${system:user.name}</value>
	<description>Location of Hive run time structured log file</description>
</property>
<property>
	<name>hive.downloaded.resources.dir</name>
	<value>/tmp/${hive.session.id}_resources</value>
	<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
	<name>hive.server2.logging.operation.log.location</name>
	<value>/tmp/${system:user.name}/operation_logs</value>
	<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
	<name>hive.metastore.uris</name>
	<value>thrift://0.0.0.0:9083</value>
	<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
<property>
	<name>hive.server2.webui.host</name> 
	<value>0.0.0.0</value>
</property>
<property>
	<name>hive.server2.webui.port</name> 
	<value>10002</value>
</property>
<property>
	<name>hive.metastore.port</name>
	<value>9083</value>
</property>
<property>
	<name>hive.server2.transport.mode</name>
	<value>binary</value>
</property>
<property>
	<name>hive.server2.thrift.sasl.qop</name>
	<value>auth-int</value>
</property>
<property>
	<name>hive.server2.authentication</name>
	<value>KERBEROS</value>
	<description>authentication type</description>
</property>
<property>
	<name>hive.server2.authentication.kerberos.principal</name>
	<value>hive/_HOST@REALM.CA</value>
	<description>HiveServer2 principal. If _HOST is used as the FQDN portion, it will be replaced with the actual hostname of the running instance.</description>
</property>
<property>
	<name>hive.server2.authentication.kerberos.keytab</name>
	<value>/etc/security/keytabs/hive.service.keytab</value>
	<description>Keytab file for HiveServer2 principal</description>  
</property>
<property>
	<name>hive.metastore.sasl.enabled</name>
	<value>true</value>
	<description>If true, the metastore thrift interface will be secured with SASL. Clients
	must authenticate with Kerberos.</description>
</property>
<property>
	<name>hive.metastore.kerberos.keytab.file</name>
	<value>/etc/security/keytabs/hivemetastore.service.keytab</value>
	<description>The path to the Kerberos Keytab file containing the metastore thrift 
	server's service principal.</description>
</property>
<property>
	<name>hive.metastore.kerberos.principal</name>
	<value>hivemetastore/_HOST@REALM.CA</value>
	<description>The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name.</description>
</property>
<property>
	<name>hive.security.authorization.enabled</name>
	<value>true</value>
	<description>enable or disable the hive client authorization</description>
</property>
<property>
	<name>hive.metastore.pre.event.listeners</name>
	<value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
	<description>List of comma separated listeners for metastore events.</description>
</property>
<property>
	<name>hive.security.metastore.authorization.manager</name>
	<value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
	<description>
	Names of authorization manager classes (comma separated) to be used in the metastore
	for authorization. The user defined authorization class should implement interface
	org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider.
	All authorization manager classes have to successfully authorize the metastore API
	call for the command execution to be allowed.
</description>
</property>
<property>
	<name>hive.security.metastore.authenticator.manager</name>
	<value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
	<description>
	authenticator manager class name to be used in the metastore for authentication.
	The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
</description>
</property>
<property>
	<name>hive.security.metastore.authorization.auth.reads</name>
	<value>true</value>
	<description>If this is true, metastore authorizer authorizes read actions on database, table</description>
</property>
<property>     
	<name>datanucleus.autoCreateSchema</name>     
	<value>false</value>
</property>

Hadoop core-site.xml

Notice here how it's .hive. that is used in these proxyuser properties; the hive service user goes together with the storage-based authorization.

sudo nano /usr/local/hadoop/etc/hadoop/core-site.xml

<property>
	<name>hadoop.proxyuser.hive.hosts</name>
	<value>*</value>
</property>
<property>
	<name>hadoop.proxyuser.hive.groups</name>
	<value>*</value>
</property>

Install Postgres 9.6

Follow this install guide for installing PostgreSQL 9.6.

sudo su - postgres
psql

CREATE USER hiveuser WITH PASSWORD 'PASSWORD';
CREATE DATABASE metastore;
GRANT ALL PRIVILEGES ON DATABASE metastore TO hiveuser;
\q
exit

Initiate Postgres Schema

schematool -dbType postgres -initSchema

Start Metastore & HiveServer2

First create the log directory the commands below write to:

sudo mkdir /var/log/hive/
sudo chown root:hadoopuser /var/log/hive
sudo chmod 777 /var/log/hive

nohup /usr/local/hive/bin/hive --service metastore --hiveconf hive.log.file=hivemetastore.log >/var/log/hive/hivemetastore.out 2>/var/log/hive/hivemetastoreerr.log &

nohup /usr/local/hive/bin/hiveserver2 --hiveconf hive.metastore.uris=" " --hiveconf hive.log.file=hiveserver2.log >/var/log/hive/hiveserver2.out 2> /var/log/hive/hiveserver2err.log &

Auto Start

crontab -e

#Add the following
@reboot nohup /usr/local/hive/bin/hive --service metastore --hiveconf hive.log.file=hivemetastore.log >/var/log/hive/hivemetastore.out 2>/var/log/hive/hivemetastoreerr.log &
@reboot nohup /usr/local/hive/bin/hiveserver2 --hiveconf hive.metastore.uris=" " --hiveconf hive.log.file=hiveserver2.log >/var/log/hive/hiveserver2.out 2> /var/log/hive/hiveserver2err.log &

Now you can check the hive version

 hive --version

Hive Web URL

http://hadoop:10002/

Beeline

#We first need a ticket to access beeline, using the hive kerberos user we set up earlier.
kinit -kt /etc/security/keytabs/hive.service.keytab hive/hadoop@REALM.CA

#Now we can get into beeline using that principal
beeline -u "jdbc:hive2://0.0.0.0:10000/default;principal=hive/hadoop@REALM.CA;"

#You can also just get into beeline then connect from there
beeline
beeline>!connect jdbc:hive2://0.0.0.0:10000/default;principal=hive/hadoop@REALM.CA

#Disconnect from beeline
!q
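
Beeline aside, you can also query HiveServer2 from Python. A minimal sketch using PyHive, assuming the pyhive package (plus its sasl/thrift_sasl dependencies) is installed and you already have a valid ticket from kinit:

from pyhive import hive

# kerberos_service_name must match the service part of hive/_HOST@REALM.CA
conn = hive.Connection(host="hadoop", port=10000, database="default",
                       auth="KERBEROS", kerberos_service_name="hive")
cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
print(cursor.fetchall())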

References

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Hive_Install_On_Ubuntu_16_04.php
https://maprdocs.mapr.com/home/Hive/Config-RemotePostgreSQLForHiveMetastore.html
https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool#HiveSchemaTool-TheHiveSchemaTool
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-InstallationandConfiguration

Java IDE Installation for Eclipse

This tutorial will guide you through configuring Eclipse for Java. Ensure you have followed the tutorial on installing Eclipse first.

You should open the Java and Debug perspectives. To do that just go to “Window”->“Open Perspective”->“Java”. This opens the Java perspective. To open “Debug” you need to go to “Window”->“Open Perspective”->“Other”. Select “Debug” and hit “Ok”.

You should also install Maven 2. Go to “Help”->“Install New Software”. In the “Work With” field type “http://m2eclipse.sonatype.org/sites/m2e” and hit “Add”. Then add the name “Maven2” or whatever name you want and hit “Ok”. Then check “Maven Integration for Eclipse” and hit “Next”. Hit “Next” again for “Install Details”, accept the license agreement and hit “Finish”. You will need to restart.

If you want, you can also open the “Project Explorer”, “Markers” and “Problems” views from “Window”->“Show View”->“Other”.

FindBugs is also a nice-to-have and I recommend it :). Go to “Help”->“Install New Software”. In the “Work With” field type “http://findbugs.cs.umd.edu/eclipse” and hit “Add”. Then add the name “FindBugs” or whatever name you want and hit “Ok”. Then check “FindBugs” and hit “Next”. Hit “Next” again for “Install Details”, accept the license agreement and hit “Finish”. You will need to restart.

You should open the “FindBugs” perspective as well. To do that just go to “Window”->“Open Perspective”->“Other”. Select “FindBugs” and hit “Ok”.

Don’t forget to lock Eclipse to launcher if you want.

Optional:

TypeScript IDE:

Used for React; it's really handy. Open Eclipse. Click Help->Install New Software. In the “work with” field put “http://oss.opensagres.fr/typescript.ide/1.1.0/” and click Add. Follow the prompts, making sure you select “Embed Node.js” and “TypeScript IDE”, and you are done for now.

HTML Editor:

Used for HTML files. Click Help->Eclipse Marketplace. Search for “HTML Editor”, then click Install. After it is installed and Eclipse is restarted, click Window->Preferences->Web. Under “HTML Files” change the encoding to “ISO 10646/Unicode(UTF-8)”. Under “Editor” add “div”. You can get more info and configuration from here.

MongoDB Testing

MongoDB is a NoSQL type of database.

Installation on Windows is rather simple: just follow the prompts. Download it from here, and you can follow the installation instructions from here if you so choose. It has a max document size of 16MB, but you can see all the base configurations from here. Once you install MongoDB you will have to start the server: run C:\Program Files\MongoDB\Server\3.2\bin\mongod.exe. Then you can run C:\Program Files\MongoDB\Server\3.2\bin\mongo.exe to start querying the data.

MongoDB is a document db. Ad hoc querying is easy if the data you are querying is within the 16MB max document size. There are no tables in MongoDB; they are called collections instead, and they store BSON documents, which are basically JSON. Querying data is fast, I have found. It uses the map-reduce paradigm for querying data. If you are not familiar with map reduce, it's basically just JavaScript.
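
If you would rather work from Python than the mongo shell, a minimal sketch using pymongo (the collection and field names here are made up for illustration):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["test"]

# Insert a document and read it back
db.logs.insert_one({"level": "ERROR", "message": "disk full"})
print(db.logs.find_one({"level": "ERROR"}))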

I will have more later as I test more and more.

I have done some testing and can give you some basic code below.

Show All Collections:

 show collections

See All Databases:

 show dbs

Drop A Collection:

 db.COLLECTIONNAME.drop()

Drop All Collections:

 db.getCollectionNames().forEach(function(c) { if (c.indexOf("system.") == -1) db[c].drop(); })

Create A Collection:

 db.createCollection("logs", {autoIndexId: true})

Create An Index:

 db.logs.createIndex( { KEY: VALUE } )

MapReduce:

 db.COLLECTION.mapReduce(function(){
      var key = WHATEVER;
      var value = WHATEVER;

      emit(key, value);
},
function(key, values) {
      var output = {
            KEY: VALUE
      };
      return output;
}, {out: {inline: 1}})
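
A hedged pymongo version of the skeleton above, assuming a logs collection with a level field (inline_map_reduce is the pymongo 3.x API):

from bson.code import Code
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017/")["test"]

# Count documents per level; inline output comes back as a plain list
mapper = Code("function () { emit(this.level, 1); }")
reducer = Code("function (key, values) { return Array.sum(values); }")
for row in db.logs.inline_map_reduce(mapper, reducer):
    print(row["_id"], row["value"])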

Find matching element:
The .pretty() call formats the JSON output for readability.

 db.COLLECTIONNAME.find({ "KEY.SUBKEY.ARRAYINDEX" : VALUE }).pretty()

Find records that match using greater than and less than or equal to, and display the associated id field.

 db.COLLECTIONNAME.find({ "KEY" : {$elemMatch: { $elemMatch: { $gt: 0.0005, $lte: 0.0005862095 } } } }, { id: "_id"  }).pretty()

Find records using IN and return the id field.

 db.COLLECTIONNAME.find({ "KEY" : { $in: [ 123,456 ] } }, { id: "_id" }).pretty()

You can loop through data and insert it into a new collection:

 db.COLLECTIONNAME.find({ "KEY.SUBKEY.ARRAYINDEX" : 0 }).forEach(function(obj){ db.NEWCOLLECTION.insert(obj) })

CouchBase Testing

CouchBase is a NoSQL type of database.

Installation on Windows is rather simple: just follow the prompts. Download it from here. Unfortunately, at this time CouchBase on Windows wants you to disable the firewall. I don't recommend that, and due to this critical issue I do not currently recommend CouchBase until it has been fixed. Once installed it is viewable from http://localhost:8091. It has a max document size of 20MB, but you can see all the base configurations from here. CouchBase is a document db with fast importing of documents. CouchBase has views like CouchDB does, but a view in CouchBase is a way of querying for data and is not like CouchDB's views; CouchBase still has indexes. Its views rebuild quickly, which makes querying data faster than in CouchDB. It uses the map-reduce paradigm for creating views. If you are not familiar with map reduce, it's basically just JavaScript.

I will have more later as I test more and more.

I have done some testing with a view and can give you some syntax for writing one.

 function(doc, meta) {
      emit(meta.id, output);
}
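
Querying a view is just an HTTP call against the view port (8092 by default). A sketch with Python requests, assuming a bucket named default and the view saved as by_id in a development design document named dev_test:

import requests

# Views are served from port 8092, not the admin console on 8091
url = "http://localhost:8092/default/_design/dev_test/_view/by_id"
resp = requests.get(url, params={"limit": 10})
for row in resp.json().get("rows", []):
    print(row["key"], row["value"])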

CouchDB Testing

CouchDB is a NoSQL type of database. CouchBase is related but entirely different.

Installation on Windows is rather simple: just follow the prompts. Download it from here. Once installed it is viewable from http://localhost:5984/_utils/. It has a max document size of 4GB, but you can see all the base configurations from here. CouchDB is a document db. If you come from a traditional relational setting you might get frustrated at first, because querying documents is slow until you create a view. A view is kind of like an index in a relational setting, and each different way you query the data needs its own view. What gets a little ridiculous is that as your data grows, a view isn't updated with the newest data until the view is called. So at first it's slow, but each time it gets queried it gets faster. It uses the map-reduce paradigm for creating views. If you are not familiar with map reduce, it's basically just JavaScript.

I will have more later as I test more and more.

I have done some testing with a view and can give you some syntax for writing one.

 function(doc) {
      //Add any code you want
      emit(key, output);
}
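
Since CouchDB is all HTTP, you can create and query a view from Python with nothing but requests. A sketch assuming a database named test on an older CouchDB with no authentication configured:

import requests

base = "http://localhost:5984"

# Create the database and a design document holding one view
requests.put(base + "/test")
view = {"views": {"by_type": {"map": "function(doc) { emit(doc.type, doc); }"}}}
requests.put(base + "/test/_design/example", json=view)

# Add a document, then query the view
requests.post(base + "/test", json={"type": "log", "message": "hello"})
print(requests.get(base + "/test/_design/example/_view/by_type").json())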

PropertyGrid: CheckBoxList Property

This entry is part 5 of 5 in the series C#: Property Grid

Build a CheckBoxList Property Type

public class CheckBoxList : INotifyPropertyChanged
{
	private dynamic _values;
	private dynamic selectedItem;
	public event PropertyChangedEventHandler PropertyChanged;
	public event PropertyValueChangedEventHandler PropertyValueChanged;

	public CheckBoxList(String name, PropertyGrid pg)
	{
		PropertyName = name;
		PG = pg;
	}

	private String PropertyName { get; set; }
	private PropertyGrid PG { get; set; }

	public dynamic Values
	{
		get
		{
			return _values;
		}
		set
		{
			if (value != null)
				_values = value;
		}
	}

	public string ValueMember { get; set; }
	public string DisplayMember { get; set; }

	[Browsable(false)]
	public dynamic SelectedItem
	{
		get
		{
			return selectedItem;
		}
		set
		{
			String oldValue = selectedItem;
			selectedItem = value;
	
			if (PropertyChanged != null)
				PropertyChanged(this, new PropertyChangedEventArgs(PropertyName));

			if (PG != null)
			{
				if (PropertyValueChanged != null)
					PropertyValueChanged(this, new PropertyValueChangedEventArgs(PG.SelectedGridItem, oldValue));
			}
		}
	}

	public override string ToString()
	{
		return SelectedItem;
	}
}

PropertyGrid: CheckBoxList Editor

This entry is part 4 of 5 in the series C#: Property Grid

You may want to add a CheckBoxList to your property grid. If you do, then the below code is what you will need to create the editor.

Build the CheckBox List Editor

 public class CheckBoxListEditor : System.Drawing.Design.UITypeEditor
{
      private IWindowsFormsEditorService es = null;
      
      public override UITypeEditorEditStyle GetEditStyle(System.ComponentModel.ITypeDescriptorContext context)
      {
            return UITypeEditorEditStyle.DropDown;
      }

      public override object EditValue(ITypeDescriptorContext context, IServiceProvider provider, object value)
      {
            if (provider != null)
            {
                  // This service is in charge of popping our ListBox.
                  es = ((IWindowsFormsEditorService)provider.GetService(typeof(IWindowsFormsEditorService)));

                  if (es != null && value is CheckBoxList)
                  {
                        var property = (CheckBoxList)value;
                        var cl = new CheckedListBox();
                        cl.CheckOnClick = true;

                        Dictionary<String, Boolean> vals = new Dictionary<string, bool>();

                        foreach (KeyValuePair<String, Boolean> kvp in property.Values)
                        {
                              vals[kvp.Key] = kvp.Value;
                              cl.Items.Add(kvp.Key, kvp.Value);
                        }

                        // Drop the list control.
                        es.DropDownControl(cl);

                        if (cl.SelectedItem != null )
                        {
                              vals[cl.SelectedItem.ToString()] = !vals[cl.SelectedItem.ToString()];
                              property.Values = vals;
                              property.SelectedItem = String.Join(", ", (from v in vals where v.Value == true select v.Key).ToList());
                              value = property;
                        }
                  }
            }

            return value;
      }
}

PropertyGrid: DropDown Property

This entry is part 3 of 5 in the series C#: Property Grid

You may want to add a dropdown to your property grid. If you do then the below code is what you will need to create the property. You will also need the editor.

Build a DropDown Property Type

 public class DropDownList : INotifyPropertyChanged
{
      private dynamic _values;
      private dynamic selectedItem;
      public event PropertyChangedEventHandler PropertyChanged;
      public event PropertyValueChangedEventHandler PropertyValueChanged;

      public DropDownList(String name, PropertyGrid pg)
      {
            PropertyName = name;
            PG = pg;
      }

      private String PropertyName { get; set; }
      private PropertyGrid PG { get; set; }

      public dynamic Values
      {
            get
            {
                  return _values;
            }
            set
            {
                  if (value != null)
                        _values = value;
            }
      }

      public string ValueMember { get; set; }
      public string DisplayMember { get; set; }

      [Browsable(false)]
      public dynamic SelectedItem
      {
            get
            {
                  return selectedItem;
            }
            set
            {
                  String oldValue = selectedItem;
                  selectedItem = value;

                  if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs(PropertyName));

                  if (PG != null)
                  {
                        if (PropertyValueChanged != null)
                              PropertyValueChanged(this, new PropertyValueChangedEventArgs(PG.SelectedGridItem, oldValue));
                  }
            }
      }

      public override string ToString()
      {
            return SelectedItem;
      }
}

PropertyGrid: DropDown Editor

This entry is part 2 of 5 in the series C#: Property Grid

You may want to add a dropdown to your property grid. If you do then the below code is what you will need to create the editor.

Build the DropDown Editor

 public class DropDownEditor : System.Drawing.Design.UITypeEditor
{
      private IWindowsFormsEditorService es = null;
      public override UITypeEditorEditStyle GetEditStyle(System.ComponentModel.ITypeDescriptorContext context)
      {
            return UITypeEditorEditStyle.DropDown;
      }

      public override object EditValue(ITypeDescriptorContext context, IServiceProvider provider, object value)
      {
            if (provider != null)
            {
                  // This service is in charge of popping our ListBox.
                  es = ((IWindowsFormsEditorService)provider.GetService(typeof(IWindowsFormsEditorService)));

                  if (es != null && value is DropDownList)
                  {
                        dynamic property = (DropDownList)value;
                        var list = new ListBox();
                        list.Click += ListBox_Click;

                        if (property.Values != null)
                        {
                              foreach (object item in property.Values)
                              {
                                    var propertyInfo = item.GetType().GetProperty(property.DisplayMember);
                                    list.Items.Add(propertyInfo.GetValue(item, null));
                              }
                        }
                        // Drop the list control.
                        es.DropDownControl(list);

                        if (list.SelectedItem != null && list.SelectedIndices.Count == 1)
                        {
                              property.SelectedItem = list.SelectedItem;
                              value = property;
                        }
                  }
            }

            return value;
      }

      void ListBox_Click(object sender, EventArgs e)
      {
            if (es != null)
            {
                  es.CloseDropDown();
            }
      }
}

How I Came To Restore a 1974 Mustang II FastBack

The dates in this retelling may be slightly off. It has been difficult to find tangible years/proof based on the limited documentation I had during that time. I tried tracking down image dates based off the negatives I had, and even the printed photos didn't have good information on them, unfortunately.

The story starts at age 15 in 1995 with a red 1980 Mustang, as strange as that sounds. My friend's brother had it in London, Ontario. We went up to see it and we had it towed back to Windsor, Ontario at a cost of $300, only to find out later it wasn't worth fixing. Just way too much rust. So I decided to give away the inline 6 and send the car to the junk yard. But I had the restoration bug, and my search continued to find a Mustang I wanted/needed to restore. And thus the adventure of restoring a Mustang begins.

In 1996, I think, I found a 1975 Mustang II Coupe somewhere around Drouillard Rd & Tecumseh Rd. So I bought it and brought it home. I didn't clear it with my parents, just brought it home. I later had to get a car cover. The search for the parts I needed eventually led me to a garage somewhere around University Ave. W. and The Ambassador Bridge, which I found from either an ad in the paper or word of mouth; I can't recall. There we found a 1974 Mustang II completely gutted except for the drive shaft, axle and a V6 engine just sitting in it with no radiator, hoses, etc., with the windshield cracked and some rust on the hatch and firewall. I fell in love immediately. The garage where we found it was a mess, filled with boxes and such all piled up and over the car. I talked with the owner and he said if I could get it out I could have it for free. We got it out and called a tow truck to bring the car to my house. When I got it home and parked it behind my 1975 Mustang II, my parents were in shock that I had brought yet another Mustang home. This was the third Mustang by this time, but only two of them at my house. My mom said that one had to go behind the fence. So I pushed the 75 all the way to the back. We had a big yard and it took probably half an hour to get it all the way back there. I decided right then that the 75 would be used as a parts car and I would restore the 74, as the body was in much better shape even though it had no interior, dash, gas tank, anything really, but had a V6 engine sitting in it.

Finding parts was a very difficult task as Facebook wasn't around. My friend and I spent a long time trying to track down "usable" parts. We looked in the classified ads in the Windsor Star, down back roads, making connections, going to junk yards. I eventually found a farm on the outskirts of Windsor that had, I think, four Mustang II's in the yard. I knocked on the door and asked if I could buy parts off their cars. They agreed but warned that there were bee hives in the cars and I would need to be careful. It took a while to get the parts I needed off the cars due to how many bees were flying around. To my luck, not once was I stung, but it did take me a week or two to complete. I didn't want to just get rid of the bees, which would have made my job a heck of a lot easier. I went to a few junk yards and got parts. Some junk yards still had some of these cars, though few and far between. Sometimes I had to travel an hour to the next junk yard just to find the car I needed to get parts from. I eventually made two contacts: one at Performance Parts Plus in Windsor, and a backyard mechanic (Gary, who drove an orange '70s Dodge Ram pickup). The guy at Performance Parts Plus helped get me the hard-to-find parts that couldn't be found in Canada. He had them brought in from the US. He knew the right people.

I worked on the car in my driveway for months, maybe even a year; it's hard to recall. The guy at Performance Parts Plus got a line on a C4 transmission from a junk yard and got a torque converter for it. Finding a bell housing that would work with a V8 for my car was very difficult, though. Eventually we tracked it down, and we did get the V6 put in from the 1975. But eventually the Mustang was taken, the V6 removed and all the wires cut. I tracked down the car and one night snuck out of my parents' house and had a tow truck bring it back to my house. The V6 was gone by this point, but it didn't matter much as it wasn't a V8. It took a long time to find the parts I needed and to get the wire harness rebuilt. I had a radiator from the 75 and some engine stuff from that car, but decided after that I would get a V8 and redo the engine bay with all high performance parts. I got a line on a 1988 302 roller motor from another backyard mechanic's garage on Walker Rd near Wyandotte. For a while the '75 and '74 Mustang both sat in the back where I got the engine from, and I worked on it there instead of at my house. We eventually got the engine put in, and I think after a few months I had both Mustangs (75/74) towed back home to my house. I then got a transmission cooler, Flex-a-Lite electric fan, aluminum radiator, MSD ignition, Edelbrock Performer RPM intake, new gas lines run, brake lines repaired, power steering rack and pinion, a Holley 650 dual feed dual pump carb, and the original radio and oil pan from the guy at Performance Parts Plus. I got the seats redone at Hai Ho Upholstery in Windsor. They did a fantastic job; I think that ran me $500. I got rims and tires from Franks Tire & Wheels.

Eventually I started working full time in the Tool & Mould industry around 1998 and worked on the car in my spare time. I worked on the car during the day and went to work at night. Gary helped me get the car to a driveable state with the parts purchased from Performance Parts Plus. He had a garage on Tecumseh Road in Tecumseh near where Green Giant used to be. I kept the car there and he helped me restore it and find parts I needed. I paid him for his time. We did a lot of it together during the day, then I went to work at night. Across the road from his garage was Coleman-Dillion, who put a sump in my gas tank, restored it and got rid of the rust. They used dry ice to clear the gas tank of fumes so they could do welding. Gary (the mechanic) had a fully restored Dodge Dart, just beautiful. He helped get the car done as much as could be done at the time. He even did the body work and painted it. We used Wimbledon White. I had it appraised in 2000. I had it in one car show, I think around 2001, downtown Windsor on the waterfront. I didn't really know where the car show was, although I had heard of it. I ended up getting a joke award for only having one headlight, although I haven't been able to find that award, unfortunately.

Eventually I decided to go back to school to become a Computer Programmer. The Tool & Mould industry at that time was not very stable; I was laid off a few times, and since I was a CNC Programmer I decided to do what I was passionate about anyway. I was working at Dimension Tool at this time, working nights doing 80-100 hours a week, taking one C++ class at St. Clair College, and then in my spare time working on the Mustang. My job at Dimension Tool only lasted 2 months as I couldn't keep 80-100 hours a week going. I was burnt out! The Mustang was put away in storage at J's Loc It on Manning Road in Windsor while I went to school from 2002-2005. As soon as school was finished I moved away, and the Mustang stayed in storage. It stayed at J's Loc It for a while after I was away, but eventually moved to my uncle Dean's shop and eventually to my Nonna's garage. At some point while it was away in storage it was scraped on the driver's side quarter panel and rear bumper. Not bad, but it required some body work, which I would eventually get done. No idea where or when it happened; it just happened.

In 2014 my parents helped me bring the car to my home and I started working on it again. It was in very rough shape, having sat from roughly 2001 to 2014 and only having been started once or twice during that time, as I was far away and couldn't get back to start it. After I got it home I began the monumental task of documenting what was wrong with it, as it had trouble starting but did turn on. I then got onto Facebook and started looking for Mustang groups, stumbled on eight of them and joined all of them. I got into contact with a guy named Phil Schmidt at Mustang II Speciality Shop New Used Obsolete 1974-1978 Ford Mustang II Parts, who had the parts to fix my hatch and get it to close. I then made another contact, Tom Porter, and I have been buying parts from them ever since. I have replaced all the dash panels, all the hatch pieces, headlight bezels, headlights, new grill, new emblems and letters, vents for the dash, arm rests, wheel well chrome, marker light chrome, driver side fender brace, window regulators, C4 transmission shifter rod, dome light, cat whiskers and rear seat belts. I got a new air filter and chrome from Part Source, and other parts from Tom Porter. One of the most important things I was never able to do back in 2000 was to tie the frames. I got frame ties from Stumpy's Fab Works. Vinnie's Mr. Fix It helped me get the car back running, put the frame ties on, cleaned up the engine bay, repaired the gas leak and oil leak, relocated the gas filter and got the gas gauge working again.

In 2015 I had the transmission redone by Gino's Automatic Transmission Ltd and Vinnie's Mr. Fix It. Vinnie's also helped get the shifter rod put in. I did a lot of interior, chrome and headlight work using the parts bought in the winter of 2014. I also had Rick at Racetrack AutoGlass & Trim redo the interior door panels.

In 2016 I had the exhaust redone by Carlane Auto Centre. I had the rear hatch, driver side door, rear bumper and front of the hood redone by Sutton Auto Collision. Vinnie's helped me move the custom volt, oil and temp gauges from above my gas gauge to where the ash tray was. New weather stripping was put on the doors and hatch. The car still had trouble running; battery draining was always an issue. Eventually I got into contact with Mike at Auto Electric MD. He helped me rewire the entire engine bay and dashboard. We had so many issues! Most of the wire was corroded or connected improperly. The voltage regulator only had 3 out of the 4 wires connected; the 4th wire was what was missing so the alternator would charge the battery. Wire was overused and had to be completely cut out. Headlight wiring was all redone and corrected, the MSD ignition corrected, and the voltage regulator and solenoid moved to provide the shortest possible route for power. By the time we were done the engine bay we had probably five pounds of unused wire/garbage. A new starter, alternator and voltage regulator were needed, as the starter cracked when we tried to unbolt it. It was honestly a miracle the car even ran and that it didn't catch fire. The dash area needed lots of help as well. My tach was missing the tach reducer on the back. The connector that connected to the back of the dash cluster was broken and needed replacing, which is why my gauges would stop working off and on. The tach at this point still doesn't work because we need an MSD Tach Adapter 8920. Then through my buddy Vinnie I got added to the Facebook group Ontario Classic Cars, and he linked me to a 1978 Mustang II that was being parted out in the Hamilton area. My kids Lucien and Sebastian and I went to see the car, and I was able to find a center console, tail light wire harness, chrome and misc other parts to finish my car. The center console needs repainting and the chrome shined up. Vinnie sand blasted a part that was full of rust for me and it looks amazing now. That was the part that holds the engine harness to the firewall.

In October 2016 I had a 74 Mustang II with horse made from 16ga #4 brushed stainless steel by DK Custom Welding and Design. It measured 24″ long x 6.5″ high.

The plan for next summer is to shorten the gas line to the carburetor, put in the MSD Tach Adapter, possibly fix the crack in the hood, fix the leak just below the driver side glass, and put on the part that holds the engine harness to the firewall.

In 2017 I tried starting the Mustang and it ran but leaked coolant. I brought it over to Vinnie's Mr. Fix It. I had the gas line going from the carburetor replaced. I had a starter cover put on over top of the starter to protect it from heat. We replaced all the hoses in the engine bay with brand new ones. When we got to the water pump we found out the water pump I am using is a D00E-D, which is from a 1970 engine. We still don't know which ones will fit besides the D00E-D, and we are trying to track down what engine code I am running. But it turns out I need a very specific water pump with very little room to maneuver. We are still looking for the right pump.

On April 9 we will be heading out to a junk yard with Mike from Auto Electric MD. He knows where a few cars like mine are. I also want to see if they have the water pump I need there. Unfortunately due to unforeseen circumstances we were unable to make this outing; however, he got me a new chrome alternator and I had it put on during the water pump work at Vinnie's, and it looks amazing. I decided to also get the exhaust headers wrapped to contain heat, put a starter cover on, and have the oil pan dipstick properly mounted.

In June 2017 lots of unforeseen work occurred. My timing cover started leaking coolant, I had the center console redone by Brett McHugh, the gas/brake pedals replaced, and the MSD Tach Adapter 8920 put on. I also had the front entirely redone by Sutton Auto Collision in Guelph and it looked absolutely amazing. Can't wait to get the hood fixed, but that might have to wait a month or so. I went to McLeans Auto Wreckers with Mike, the trip I was supposed to take back in April but couldn't. It was amazing; I saw a lot of Mustangs, roughly 5 of them from 74-78.

In August I had Vinnie's Mr Fix It fix my carb. The choke had died and it had quite a few vacuum leaks. The car was running really well after.

I was able to bring my Mustang out to Guelph Rib Fest 2017 in August. It was a blast and it was great to get the car out again. Just days before, I wasn't sure if I would make it due to my carb having a really high fast idle speed. It took three days, but I was able to bring it back down and the car was driveable once again. I was only able to go one day due to an issue with oil and coolant possibly leaking together. It currently needs investigation; most likely next summer's project.

In September I brought my car over to Sutton Auto Collision in Guelph to have the hood crack fixed. I am very excited that it will finally be fixed. Just brings me steps closer to finishing!

In September Mike from Auto Electric MD found me a Holley 600 single feed with vacuum secondaries. This week I hope to put it in. But we will see.

I got into contact with JD/Brenda from All Classic Motors Ltd. for the final parts I am missing for my car. I am very excited that they have them and soon that will be done. Stay tuned for further updates regarding this. I received the last marker light chrome for the rear passenger side and the speaker covers. Can’t wait to get them installed. Installed the rear passenger side marker chrome and it looks amazing! The speaker covers might be difficult to get on.

The Holley will have to be put on next year.

In 2018 I got the rear view mirror replaced by Richard (Vinnie @ Vinnie's Mr. Fix-It! got me in contact with him). I also got the proper hatch latch from Phil Schmidt at Mustang II Speciality Shop New Used Obsolete 1974-1978 Ford Mustang II Parts because my hatch wouldn't close right. I installed it and now the hatch closes perfectly. I also worked with Mike from Auto Electric MD to get all the following done.

  • Replaced the intake and timing cover gasket
  • Put a new thermostat and gasket in (165°)
  • Replaced the bypass hose
  • Changed the oil
  • Put on the Holley 600 carb
  • Replaced the fuel pump
  • Replaced the fuel hose to the carb
  • Fine-tuned the carb
  • Replaced the throttle bracket
  • Modified the air breather base
  • Relocated the alternator and replaced the mounting brackets
  • Machined and modified the power steering bracket
  • Replaced the transmission dip stick
  • Fixed the transmission lines to the transmission cooler
  • Replaced the transmission fluid

Now the car runs amazing once again.

This year (2019) I will be replacing the tires, putting the speaker covers on, getting speakers, hooking up the radio, and seeing if we can get the interior foot lamps running.

In April 2019 Mike from Auto Electric MD got me in touch with Cor's Tires & Accessories in Guelph and we got new tires put on the Mustang. They did an amazing job. We put BF Goodrich Radial T/A P205/70R14 on the front and P245/60R14 on the back. They even put on chrome valve stems.

In May 2019 Mike from Auto Electric MD mounted the speaker covers for me. They look amazing!

In August 2019 Mike from Auto Electric MD got the speedometer fixed. Now I actually know how fast I am going.

In July 2020 Mike from Auto Electric MD did some final work on the Mustang before I moved and got married to my amazing wife :). Although 2020 had its share of challenges. The radio just had static before; it now works and gets radio stations. The lower and upper control arm bushings were done. The radius arm bushings were also done. Anti-rattle clips were put on the front brakes.

In October 2020 we moved to Amherstburg. I started working on the Mustang again in March 2021. When I went to start the car it was leaking a lot of gas. So I brought it over to Thrasher Sales & Leasing Ltd. The issue was my carb. They did a rebuild on it. They also fixed quite a few other leaks from my axle and transmission. It all works now with no leaks.

In May 2021 I decided to start cleaning up my interior. I got #8 and #10 self tapping screws 3/4″ to 2″ in length from Rona. Then replaced all the interior screws with the same screw. I also bought black screw caps to make it look cleaner so you don’t see screw heads. I also bought new floor mats from Auto Custom Carpets. They were 801 Black Cutpile with #110 Mustang Silver Running Pony Logo. They look fantastic in the car.


Postgres: PgAgent Installation

This describes the necessary steps to install PgAgent on Ubuntu 14.04.

PgAgent:

  • apt-get install pgagent
  • sudo -u postgres psql
  • CREATE EXTENSION pgagent;
    • Check it exists by running select * from pg_extension;
  • Edit the .pgpass file and add the connections you will need.
    • If you don't know how to add the .pgpass file, see adding .pgpass below.
  • Execute the following command
    • pgagent hostaddr=localhost dbname=postgres user=postgres
  • You can also follow https://www.pgadmin.org/docs/1.8/pgagent-install.html for installation.

.pgpass:

  • touch .pgpass
    • Put it in your home directory.
    • It contains connection information, one connection per line:
      • hostname:port:database:username:password
  • Modify permissions so only the owner can read and write
    • chmod 600 .pgpass
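
A quick way to confirm the .pgpass entry works is to connect from Python without supplying a password. psycopg2 goes through libpq, which reads ~/.pgpass automatically; the connection details below are placeholders:

import psycopg2

# No password argument: libpq looks it up in ~/.pgpass
conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres")
cur = conn.cursor()
cur.execute("SELECT version();")
print(cur.fetchone())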

Postgres: Setup

If you want to set up a PostgreSQL 9.6 database server on Ubuntu 16.04, these are the basic steps.

Installation:

  • sudo apt-get update
  • sudo apt-get upgrade
  • sudo reboot (if necessary)
  • echo "deb https://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
  • wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
  • sudo apt-get update
  • sudo apt-get install postgresql-9.6
  • cd /var/
  • mkdir pg_log (This is where the logs will be stored)
  • chmod o+wxr /var/pg_log/
  • cd /etc/postgresql/9.6/main
  • nano postgresql.conf:
    • CONNECTIONS AND AUTHENTICATION
      • set listen_addresses = '*'
      • Set max_connections to 250 (or a reasonable number that works for your setup)
      • logging_collector = on
      • log_directory = '/var/pg_log'
      • log_file_mode = 0506
    • AUTOVACUUM PARAMETERS
      • autovacuum = on
        • Remove #
      • autovacuum_analyze_threshold = 100
      • autovacuum_vacuum_threshold = 100
      • track_counts = on
  • nano pg_hba.conf
    • host all all 0.0.0.0/0 md5
    • local all postgres trust
  • systemctl enable postgresql
  • systemctl start postgresql
  • /etc/init.d/postgresql restart
  • sudo -u postgres psql template1
  • ALTER USER postgres with encrypted password 'SET A PASSWORD';
  • \q
  • sudo -u postgres psql
  • CREATE EXTENSION adminpack;
    • To ensure it was installed correctly, run "select * from pg_extension;"
  • If you want to view the postgresql service to find what directories are currently being used, run "ps auxw | grep postgres | grep -- -D"

Create DB:

  • sudo -u postgres psql
  • CREATE DATABASE ##DATABASE NAME##;
    • First run \l to check that the DB doesn't exist already.
  • CREATE ROLE ##ROLE NAME## LOGIN ENCRYPTED PASSWORD '##PASSWORD##';
  • GRANT ##REQUIRED## PRIVILEGES ON DATABASE ##DATABASE NAME## TO ##ROLE NAME##;
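
To sanity-check the new database and role from Python, a minimal psycopg2 sketch; the names and password are the placeholders from the commands above:

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="DATABASE_NAME",
                        user="ROLE_NAME", password="PASSWORD")
cur = conn.cursor()
# If the grants are right, creating and dropping a table should succeed
cur.execute("CREATE TABLE smoke_test (id integer);")
cur.execute("DROP TABLE smoke_test;")
conn.commit()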

Useful PSQL Commands

  • \l = list database
  • \q = quit
  • \c DBNAME = switch to db
  • \dt = list tables
  • ALTER DATABASE name OWNER TO new_owner
  • ALTER TABLE test OWNER TO test;

Formatting and Readability

This is the guest editorial I put on SQL Server Central.

By Oliver Gaudreault, 2010/06/18

Today we have a guest editorial from Oliver Gaudreault.

Too often I have seen stored procedures, functions and general SQL scripts that are indecipherable. What does this mean? It means you will be forced to spend time making the code readable before you can debug or investigate it. Many times I have run across someone who edited an IF statement thinking it was in a block when it wasn't; the IF only governed the first statement, the rest ran regardless, and the result was not what the original business logic intended. Proper structure and form in the code would remedy this problem.

Using the following would help to make the code more readable:

  • Ensure all flow control statements start and terminate appropriately.
  • Structure indentation properly.
  • For inline flow control statements (such as IF, WHILE, etc.) you don't strictly need a BEGIN and END, but in my experience they are worth adding anyway.

Not only does this methodology apply to SQL, but I believe it applies to any modern programming language. Some may think that making your code readable might render you replaceable. I beg to differ! I want to see programmers who are versatile, who can quickly and easily find and repair an issue with little wasted effort. Lack of time should not equal bad practices. Good practices should equal more time.

This is, of course, debatable, and I would like to know others' thoughts on it.

Dynamic Property Grid

This entry is part 1 of 5 in the series C#: Property Grid

There are a bunch of resources around for adding a property grid to your .NET 4.5 project; however, I often find that they are really bits and pieces of the solution and very rarely an entire overview. There are examples of using a class as your property grid's object, but what if you are working with dynamic data where a static class may not be the answer? I will provide an example below of just such a case, which includes a parent-type static class as well. If you have any questions please feel free to ask.

The first step is to create our type builder and to do that we must create the assembly, builder and module.

 AssemblyName assemblyName = new AssemblyName("ASSEMBLY_NAME_YOU_WANT");

AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.Run);

ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule("MODULE_NAME_YOU_WANT");

Here we need to define our type. Give it any name you want and specify what types it will extend. The third argument is optional: if you have a parent type that you also want to add to the dynamic type, this is where you do it.

 TypeBuilder typeBuilder = moduleBuilder.DefineType("TYPE_NAME_YOU_WANT", TypeAttributes.Public | 
                              TypeAttributes.Class | TypeAttributes.AutoClass | TypeAttributes.AnsiClass | 
                              TypeAttributes.BeforeFieldInit | TypeAttributes.AutoLayout, 
                              Type.GetType("NAMESPACE.CLASSNAME"));

If we used the parent type this is what it could look like. Up to you really.

 namespace AwesomeProject.PropertyGrid
{    
	public class MyName
	{
		public MyName() { }

		[Browsable(true)]
		[Category(##MYCATEGORY##)]
		[ReadOnly(false)]
		[Description(##MYDESCRIPTION##)]
		public DropDownList MyAwesomeProperty 
		{
			get;
			set;
		}
	}
}

If the property in your parent type is using a custom editor, then you need to add it using the following.

 [Editor(typeof(CustomEditor), typeof(UITypeEditor))]

Here is where we need to dynamically add to our type builder. You probably have some sort of call to a database which returns your options, so just loop through them and add to the builder.
First create the private field. The field type can be anything: String, DateTime, Int32, Double. It could even be a type you create, which I will explain later.

 FieldBuilder fieldBuilder = typeBuilder.DefineField(##FIELDNAME##, ##FIELDTYPE##, FieldAttributes.Private);

Now we create the public property.

 PropertyBuilder propertyBuilder = typeBuilder.DefineProperty(##FIELDNAME##, System.Reflection.PropertyAttributes.None, ##FIELDTYPE##, new[] { ##FIELDTYPE## });

Next we need to define the required set of property attributes for this property we created.

 MethodAttributes propertyAttributes = MethodAttributes.Public | 
                  MethodAttributes.SpecialName | 
                  MethodAttributes.HideBySig;

This part is really important. Let's say we created our own type, like a dropdown list or check box list. We need to register its TypeDescriptor here.

 TypeDescriptor.AddAttributes(typeof(##CLASS##), new EditorAttribute(typeof(##CUSTOMEDITOR##), typeof(System.Drawing.Design.UITypeEditor)));

Next we define the getter.

 MethodBuilder getter = typeBuilder.DefineMethod("get_" + ##PROPERTYNAME##, propertyAttributes, ##FIELDTYPE##, Type.EmptyTypes);

ILGenerator getterIlGen = getter.GetILGenerator();
getterIlGen.Emit(OpCodes.Ldarg_0);
getterIlGen.Emit(OpCodes.Ldfld, fieldBuilder);
getterIlGen.Emit(OpCodes.Ret);

Next we define the setter.

 MethodBuilder setter = typeBuilder.DefineMethod("set_" + ##PROPERTYNAME##, propertyAttributes, null, new Type[] { ##FIELDTYPE## });

ILGenerator setterIlGen = setter.GetILGenerator();
setterIlGen.Emit(OpCodes.Ldarg_0);
setterIlGen.Emit(OpCodes.Ldarg_1);
setterIlGen.Emit(OpCodes.Stfld, fieldBuilder);
setterIlGen.Emit(OpCodes.Ret);

Bind the setter and getter to the property builder

 propertyBuilder.SetGetMethod(getter);
propertyBuilder.SetSetMethod(setter);

At this point we can also set the category attribute.

 propertyBuilder.SetCustomAttribute(new CustomAttributeBuilder(
typeof(CategoryAttribute).GetConstructor(
new Type[] { typeof(string) }), new object[] { ##CATEGORYNAME## }));

At this point we can also set the description attribute.

 propertyBuilder.SetCustomAttribute(new CustomAttributeBuilder(
typeof(DescriptionAttribute).GetConstructor(
new Type[] { typeof(string) }), new object[] { ##DESCRIPTION##}));

At this point we can also set the read only attribute.

 propertyBuilder.SetCustomAttribute(new CustomAttributeBuilder(typeof(ReadOnlyAttribute).GetConstructor(new Type[] { typeof(bool) }), new object[] { ##ISREADONLY## }));

Next we need to create and instantiate the dynamic type.

 Type type = typeBuilder.CreateType();
dynamicType = Activator.CreateInstance(type, new object[] { });

Here is where we can set the values for each property we added.

 foreach (PropertyInfo pi in properties)
{
      pi.SetValue(dynamicType, ##THEVALUE##, null);
}
PROPERTY_GRID.SelectedObject = dynamicType;

Side Notes:

If you want to change read-only attributes after adding all the fields but before rendering the property grid, you can do that after the Activator.CreateInstance line.

 ReadOnlyAttribute attrib = (ReadOnlyAttribute)TypeDescriptor.GetProperties(dynamicType)["PROPERTY_NAME"].Attributes[typeof(ReadOnlyAttribute)];
FieldInfo isReadOnly = attrib.GetType().GetField("isReadOnly", BindingFlags.NonPublic | BindingFlags.Instance);
isReadOnly.SetValue(attrib, true);


If you want to collapse all categories, there is a property on the property grid for that, but there isn't one for selectively having only one category show with the remaining closed. I can't recall the URL where I got some assistance on this, so if anyone knows it please link it in the comments.

 GridItem root = pgRequest.SelectedGridItem;

if (root != null)
{
      do
      {
            root = root.Parent;

            if (root == null)
                  break;
      } while (root.GridItemType != GridItemType.Root);

      if (root != null)
      {
            foreach (GridItem category in root.GridItems)
            {
                  //really you can do anything you want here this is just an example.
                  //The important part is the Expanded = false.
                  if (category.Label == "CATEGORY_TO_CLOSE")
                  {
                        category.Expanded = false;
                        break;
                  }
            }
      }
}

Java: Embed Python

Let’s say you want to embed Python code in your Java application. You can use Jython.

pom.xml:

<dependency>
      <groupId>org.python</groupId>
      <artifactId>jython-standalone</artifactId>
      <version>2.7.0</version>
</dependency>

*.java

import org.python.util.PythonInterpreter;
import org.python.core.*;

private static PythonInterpreter interpreter = new PythonInterpreter(null, new PySystemState());

//You put the key to register in Python and pass the variable in
interpreter.set(KEY, VARIABLE);

//If you wanted to use this in the mapper as an example, you would pass the key, value and context to the Python code. That way, when you write to the context in Python it writes to the application's context.

interpreter.exec(PYTHONCODE);
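
For completeness, here is a sketch of what the PYTHONCODE string might contain. The names key, value and context are assumptions; they only exist because the Java side registered them with interpreter.set.

# key, value and context were registered from Java via interpreter.set(...)
# Split the incoming value and write each piece back through the context
for part in value.split(","):
    context.write(key, part)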

Java: Embed JavaScript

Let’s say you want to embed straight JavaScript code in your application.

pom.xml:

 <dependency>
      <groupId>org.mozilla</groupId>
      <artifactId>rhino</artifactId>
      <version>1.7.7.1</version>
</dependency>

*.java

 import org.mozilla.javascript.*;

private static org.mozilla.javascript.Context cx = org.mozilla.javascript.Context.enter();

private static ScriptableObject scope = cx.initStandardObjects();

private static Function fct;

//You put the key to register in JavaScript and pass the variable in
scope.put(KEY, scope, VARIABLE);

cx.evaluateString(scope, JAVASCRIPTCODE, "script", 1, null);

fct = (Function)scope.get(METHOD_IN_CODE, scope);

Scriptable mapper = cx.newObject(scope);

//If you wanted to use this in the mapper as an example, you would pass the key, value and context to the JavaScript function. That way, when you write to the context in JavaScript it writes to the application's context.
fct.call(cx, scope, mapper, new Object[] {key, value.toString(), context});

Build a Java Map Reduce Application

I will attempt to explain how to set up a Mapper, Reducer, Combiner, PathFilter, Partitioner and output format using Java Eclipse with Maven. If you need to know how to install Eclipse go here. Remember that these are not complete code, just snippets to get you going.

A starting point I used was this tutorial; however, it was built using older Hadoop code.

Mapper: Maps input key/value pairs to a set of intermediate key/value pairs.
Reducer: Reduces a set of intermediate values which share a key to a smaller set of values.
Partitioner: http://www.tutorialspoint.com/map_reduce/map_reduce_partitioner.htm
Combiner: http://www.tutorialspoint.com/map_reduce/map_reduce_combiners.htm

First you will need to create a maven project. You can follow any tutorial on how to do that if you don’t know how.

pom.xml:

<properties>
      <hadoop.version>2.7.2</hadoop.version>
</properties>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-core</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-api</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-common</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-auth</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-server-nodemanager</artifactId>
      <version>${hadoop.version}</version>
</dependency>
<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
      <version>${hadoop.version}</version>
</dependency>

Job Driver:

public class JobDriver extends Configured implements Tool {
      private Configuration conf;
      private static String hdfsURI = "hdfs://localhost:54310";

      public static void main(String[] args) throws Exception {
            int res = ToolRunner.run(new Configuration(), new JobDriver(), args);
            System.exit(res);
      }

      @Override
      public int run(String[] args) throws Exception {
            BasicConfigurator.configure();
            conf = this.getConf();

            //The paths for the configuration
            final String HADOOP_HOME = System.getenv("HADOOP_HOME");
            conf.addResource(new Path(HADOOP_HOME, "etc/hadoop/core-site.xml"));
            conf.addResource(new Path(HADOOP_HOME, "etc/hadoop/hdfs-site.xml"));
            conf.addResource(new Path(HADOOP_HOME, "etc/hadoop/yarn-site.xml"));
            hdfsURI = conf.get("fs.defaultFS");

            Job job = Job.getInstance(conf, YOURJOBNAME);
            //You can setup additional configuration information by doing the below.
            job.getConfiguration().set("NAME", "VALUE");

            job.setJarByClass(JobDriver.class);

            //If you are going to use a mapper class
            job.setMapperClass(MAPPERCLASS.class);

            //If you are going to use a combiner class
            job.setCombinerClass(COMBINERCLASS.class);

            //If you plan on splitting the output
            job.setPartitionerClass(PARTITIONERCLASS.class);
            job.setNumReduceTasks(NUMOFREDUCERS);

            //If you plan on using a reducer
            job.setReducerClass(REDUCERCLASS.class);

            //You need to set the output key and value types. We will just use Text for this example
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);

            //If you want to use an input filter class
            FileInputFormat.setInputPathFilter(job, INPUTPATHFILTER.class);

            //You must set up the input path for the files you want to parse. It accepts either String or Path
            FileInputFormat.setInputPaths(job, inputPaths);

            //Once you parse the data you must put it somewhere.
            job.setOutputFormatClass(OUTPUTFORMATCLASS.class);
            FileOutputFormat.setOutputPath(job, new Path(OUTPUTPATH));

            return job.waitForCompletion(true) ? 0 : 1;
      }
}

INPUTPATHFILTER:

public class InputPathFilter extends Configured implements PathFilter {
      Configuration conf;
      FileSystem fs;
      Pattern includePattern = null;
      Pattern excludePattern = null;

      @Override
      public void setConf(Configuration conf) {
            this.conf = conf;

            if (conf != null) {
                  try {
                        fs = FileSystem.get(conf);

                        //If you want you can always pass in regex patterns from the job driver class and filter that way. Up to you!
                        if (conf.get("file.includePattern") != null)
                              includePattern = conf.getPattern("file.includePattern", null);

                        if (conf.get("file.excludePattern") != null)
                              excludePattern = conf.getPattern("file.excludePattern", null);
                  } catch (IOException e) {
                        e.printStackTrace();
                  }
            }
      }

      @Override
      public boolean accept(Path path) {
            //Here you could filter based on your include or exclude regex or file size.
            //Remember if you have sub directories you have to return true for those so their contents get scanned
            try {
                  if (fs.isDirectory(path)) {
                        return true;
                  }

                  //You can also get the file size in case you want to do anything when files are a certain size, etc
                  FileStatus file = fs.getFileStatus(path);
                  String size = FileUtils.byteCountToDisplaySize(file.getLen());

                  //You could also move files in this section
                  //boolean move_success = fs.rename(path, new Path(NEWPATH + path.getName()));

                  if (includePattern != null && !includePattern.matcher(path.toString()).matches())
                        return false;

                  if (excludePattern != null && excludePattern.matcher(path.toString()).matches())
                        return false;

                  return true;
            } catch (IOException e) {
                  e.printStackTrace();
                  return false;
            }
      }
}
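
If you go the regex route, the driver just has to populate the two pattern keys the filter reads. A minimal sketch, assuming it runs inside the run() method of the JobDriver above (the patterns themselves are only examples):

//Inside JobDriver.run(), before submitting the job
job.getConfiguration().set("file.includePattern", ".*\\.log$");
job.getConfiguration().set("file.excludePattern", ".*_tmp.*");
FileInputFormat.setInputPathFilter(job, InputPathFilter.class);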

MAPPERCLASS:

//Remember at the beginning I said we would use Text for the key and value. Those are the last two type parameters of the Mapper we extend (the output key/value types)
public class MyMapper extends Mapper<LongWritable, Text, Text, Text> {
      //Do whatever setup you would like. Remember, anything you set on the configuration in the job driver can be accessed here
      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
            super.setup(context);
            Configuration conf = context.getConfiguration();
      }

      //This is the main map method.
      @Override
      public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            //This will get the file name you are currently processing if you want it. It is not required though.
            String filename = ((FileSplit) context.getInputSplit()).getPath().toString();

            //Do whatever you want in the mapper. The context is what you print out to.

            //If you want to embed JavaScript see http://www.gaudreault.ca/java-embed-javascript/
            //If you want to embed Python see http://www.gaudreault.ca/java-embed-python/
      }
}

If you decided to embed Python or JavaScript, you will need these scripts as an example: map_python and map_js.
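
As a concrete illustration, a word-count style map() body could look like the following. This is my own example, not part of the original template:

@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
      //Emit each whitespace-delimited token with a count of 1
      for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                  context.write(new Text(token), new Text("1"));
            }
      }
}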

COMBINERCLASS:

public class MyCombiner extends Reducer<Text, Text, Text, Text> {
      //Do whatever setup you would like. Remember, anything you set on the configuration in the job driver can be accessed here
      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
            super.setup(context);
            Configuration conf = context.getConfiguration();
      }

      @Override
      protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            //Do whatever you want in the combiner. The context is what you write out to.
            //If you want to embed JavaScript see http://www.gaudreault.ca/java-embed-javascript/
            //If you want to embed Python see http://www.gaudreault.ca/java-embed-python/
      }
}

If you decided to embed Python or JavaScript, you will need these scripts as an example: combiner_python and combiner_js.

REDUCERCLASS:

public class MyReducer extends Reducer<Text, Text, Text, Text> {
      //Do whatever setup you would like. Remember, anything you set on the configuration in the job driver can be accessed here
      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
            super.setup(context);
            Configuration conf = context.getConfiguration();
      }

      @Override
      protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            //Do whatever you want in the reducer. The context is what you write out to.
            //If you want to embed JavaScript see http://www.gaudreault.ca/java-embed-javascript/
            //If you want to embed Python see http://www.gaudreault.ca/java-embed-python/
      }
}

If you decided to embed Python or JavaScript, you will need these scripts as an example: reduce_python and reduce_js.
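
To finish the word-count illustration, the reduce() body can just sum the counts for each key. Since the operation is associative the same body also works as the combiner. Again, this is my own example:

@Override
protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
      long sum = 0;

      //Each value is a count emitted by the mapper (or pre-summed by the combiner)
      for (Text value : values) {
            sum += Long.parseLong(value.toString());
      }

      context.write(key, new Text(String.valueOf(sum)));
}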

PARTITIONERCLASS:

public class MyPartitioner extends Partitioner<Text, Text> implements Configurable
{
      private Configuration conf;

      @Override
      public Configuration getConf() {
            return conf;
      }

      //Do whatever setup you would like. Remember, anything you set on the configuration in the job driver can be accessed here
      @Override
      public void setConf(Configuration conf) {
            this.conf = conf;
      }

      @Override
      public int getPartition(Text key, Text value, int numReduceTasks)
      {
            int partitionNum = 0;

            //Do whatever logic you would like to figure out how you want to partition.
            //If you want to embed JavaScript see http://www.gaudreault.ca/java-embed-javascript/
            //If you want to embed Python see http://www.gaudreault.ca/java-embed-python/

            return partitionNum;
      }
}

If you decided to embed Python or JavaScript, you will need these scripts as an example: partitioner_python and partitioner_js.
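
A common starting point is to hash the key, which is roughly what Hadoop’s built-in HashPartitioner does:

@Override
public int getPartition(Text key, Text value, int numReduceTasks)
{
      //Mask off the sign bit so the result is never negative
      return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}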

OUTPUTFORMATCLASS:

public class MyOutputFormat<K, V> extends FileOutputFormat<K, V> {
      protected static int outputCount = 0;

      protected static class JsonRecordWriter<K, V> extends RecordWriter<K, V> {
            protected DataOutputStream out;

            public JsonRecordWriter(DataOutputStream out) throws IOException {
                  this.out = out;
            }

            @Override
            public void close(TaskAttemptContext arg0) throws IOException, InterruptedException {
                  out.writeBytes(WRITE_WHATEVER_YOU_WANT);
                  out.close();
            }

            @Override
            public void write(K key, V value) throws IOException, InterruptedException {
                  //write the value
                  //You could also send to a database here if you wanted. Up to you how you want to deal with it.
            }
      }

      @Override
      public RecordWriter<K, V> getRecordWriter(TaskAttemptContext tac) throws IOException, InterruptedException {
            Configuration conf = tac.getConfiguration();
            int numReducers = tac.getNumReduceTasks();
            //you can set output filename in the config from the job driver if you want
            String outputFileName = conf.get("outputFileName");
            outputCount++;

            //If you used a partitioner you need to split out the content so you should break the output filename into parts
            if (numReducers > 1) {
                  //Do whatever logic you need to in order to get unique filenames per split
            }

            Path file = FileOutputFormat.getOutputPath(tac);
            Path fullPath = new Path(file, outputFileName);
            FileSystem fs = file.getFileSystem(conf);
            FSDataOutputStream fileout = fs.create(fullPath);
            return new JsonRecordWriter<K, V>(fileout);
      }
}
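
As one possible write() implementation, you could emit a tab-separated line per record. This is just a sketch; adapt it to whatever format you actually need:

@Override
public void write(K key, V value) throws IOException, InterruptedException {
      //One tab-separated record per line
      out.writeBytes(key.toString() + "\t" + value.toString() + "\n");
}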

Hadoop: Commands

Below is a list of the commands I have had to use while working with Hadoop. If you have others that are not listed here, or updates to the ones below, please feel free to add them.

Move Files:

 hadoop fs -mv /OLD_DIR/* /NEW_DIR/

Sort files by size. Note this is for viewing information in the terminal only; it has no effect on the files or the way they are displayed via the web UI:

 hdfs fsck /logs/ -files | grep "/FILE_DIR/" | grep -v "<dir>" | gawk '{print $2, $1;}' | sort -n

Display filesystem information:

 hdfs fsck /FILE_DIR/ -files

Remove folder with all files in it:

 hadoop fs -rm -R hdfs:///DIR_TO_REMOVE

Make folder:

 hadoop fs -mkdir hdfs:///NEW_DIR

Remove one file:

 hadoop fs -rm hdfs:///DIR/FILENAME.EXTENSION

Copy all file from directory outside of HDFS to HDFS:

 hadoop fs -copyFromLocal LOCAL_DIR hdfs:///DIR

Copy files from HDFS to local directory:

 hadoop fs -copyToLocal hdfs:///DIR/REGPATTERN LOCAL_DIR

Kill a running MR job:

 hadoop job -kill job_1461090210469_0003

You could also do this via the web UI on port 8088.

Kill yarn application:

 yarn application -kill application_1461778722971_0001

Check status of DataNodes. Check the “Under Replicated blocks” field. If you have any you should probably rebalance:

 hadoop dfsadmin -report

Number of files in HDFS directory:

 hadoop fs -count -q hdfs:///DIR

-q is optional; it adds the columns QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME

Rename directory:

 hadoop fs -mv hdfs:///OLD_NAME hdfs:///NEW_NAME

Change replication factor on files:

 hadoop fs -setrep -R 3 hdfs:///DIR

3 is the replication factor.
You can also target a single file instead of a directory.

Get yarn log. You can also view via web ui 8088:

 yarn logs -applicationId application_1462141864581_0016

Refresh Nodes:

 hadoop dfsadmin -refreshNodes

Report of blocks and their locations:

 hadoop fsck / -files -blocks -locations

Find out where a particular file is located with blocks:

 hadoop fsck /DIR/FILENAME -files -locations -blocks

Fix under-replicated blocks. The first command finds the blocks that are under-replicated. The second sets replication to 2 for those files. You might have to restart the DFS to see a change from dfsadmin -report:

 hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files

for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ; hadoop fs -setrep 2 $hdfsfile; done

Show all the classpaths associated to hadoop:

 hadoop classpath

Hadoop: Add a New DataNode

DataNode:
Use rsync to copy the installation from one of the other datanodes you previously set up. Ensure you change any datanode-specific settings you configured during installation.

 hadoop-daemon.sh start datanode
start-yarn.sh

NameNode:

 nano /usr/local/hadoop/etc/hadoop/slaves

Add the new slave hostname

 hadoop dfsadmin -refreshNodes

This refreshes all the nodes you have without doing a full restart.

When you add a new datanode no data will exist on it yet, so you can rebalance the cluster to what makes sense in your environment.

 hdfs balancer -threshold 1 -include ALL_DATA_NODES_HOSTNAMES_SEPARATED_BY_COMMA