Basic Neural Network Classification using MATLAB: Graphical User Interfaces

After training the nets with the mean value of all attributes and getting 97.2% correctness, I will now do the GUI version 😀 because it is a lot nicer than before, graphically at least. The pre-processing steps are similar to the previous one: all you need to do is prepare the data, import it into your workspace, and transpose it. Please note, though, that this experiment uses the standard-error value and the worst value of each field. So the class attributes are exactly the same as before, but the atts attributes are not; you have to copy the next ten attribute columns to get them.
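For reference, a minimal sketch of that preparation step, assuming the raw table is already imported as a matrix called data and the class labels as targets; the variable names and the column range 12:21 are placeholders, so adjust them to your own spreadsheet layout.

% Hypothetical sketch: pick the next ten attribute columns and transpose
% so that samples run along the columns (names and indices are placeholders).
>> atts2  = data(:, 12:21)';   % standard-error (or worst) attribute columns, transposed
>> class2 = targets';          % same class labels as the previous post, transposed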

First, open the Neural Network Fitting Tool with this command, and a window will immediately pop up on your screen (yay!)
>> nftool

Nftool interface

The first page of the GUI just gives a brief explanation of the NNF Tool. The variable and target formats are exactly the same as the ones used from the command window, so prepare your variables in the MATLAB workspace beforehand to ease the next steps. Click the Next button.

Installing Variable

After installing variable

If you get a warning sign, it means you haven't transposed your variables. Transpose them before doing the next step. Click the Next button.
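If you do see that warning, the quick fix in the command window is simply to transpose the variables (assuming atts2 and class2 are the names you used):

>> atts2  = atts2';    % samples along columns, attributes along rows
>> class2 = class2';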

Validation data window

This step was skipped in the command-window version, but it is actually adjustable by the user. By default, the proportion between training, validation, and testing data is 70:15:15. Other references suggest different splits, for example 80:0:20, 60:30:10, or 90:0:10. The validation part is negotiable, so in several books it is eliminated.

A bigger portion of training data might yield a near-perfect classifier rule, but it also increases the probability of an over-fitted model, which will perform poorly on new data. In contrast, a smaller portion of training data can cause an under-fitted model, meaning the model does not learn enough from the fewer samples it is given. The error of an under-fitted model is usually worse than that of an over-fitted one. To keep an apples-to-apples comparison with the previous experiment, I leave the split at its default. Click the Next button.
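For the record, the same split can also be set from a script; this is only a sketch of how the division ratios are exposed on a network object in the Neural Network Toolbox, assuming a net variable such as the one created in the next step.

% Sketch: the GUI's 70:15:15 split expressed on a network object.
>> net.divideFcn              = 'dividerand';   % random division of samples
>> net.divideParam.trainRatio = 0.70;
>> net.divideParam.valRatio   = 0.15;
>> net.divideParam.testRatio  = 0.15;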

Network Architecture

Congratulations! You no longer have to type anything in the command window to view your network architecture. Enter your preferred number of hidden neurons for the hidden layer; 20 is the default value. To keep an apples-to-apples comparison with the previous experiment, I leave it at the default. Click the Next button.
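The script equivalent of this page is basically a one-liner, assuming the fitting-network constructor from the toolbox:

>> net = fitnet(20);   % feed-forward fitting network with 20 hidden neurons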

Before training

After training

To train the constructed neural network, click the Train button. The results are displayed in the top-right panel. After the training phase, the MSE and R values are filled in for each data portion. Click the Next button.
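Under the hood this corresponds roughly to the following commands; a sketch only, with atts2 and class2 as the assumed variable names from the preparation step.

% Sketch: train the network and evaluate the overall MSE.
>> [net, tr] = train(net, atts2, class2);
>> output    = net(atts2);                  % network outputs for all samples
>> mseValue  = perform(net, class2, output)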

Evaluate Network

This interface gives you the chance to re-train the network if the result is not satisfying yet. Re-training can be done by adjusting the network size or importing a larger dataset. You can also run an extra test with an additional test dataset. Click the Next button.

Saving Result

This screen makes a big difference in how easily the results can be saved. It also lets users generate an M-file so the created neural network architecture can be reused on other occasions. Click the Save Results button and then the Finish button.

Your workspace will then look like this:

Workspace after
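With the net object saved to the workspace, the trained network can be reused without the GUI; a minimal sketch, where newAtts is a hypothetical matrix of new samples in the same transposed format as atts2.

>> newOutput = net(newAtts);       % apply the trained network to new samples
>> newOutput = round(newOutput);   % map the continuous output back to class labels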

Let's build the confusion matrix, but do not forget that the output has not been transformed into binary digits yet; we still have to round it and plot the confusion matrix manually 😀
>> output = round(output)
>> plotconfusion(class2, output)
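If you prefer a numeric summary instead of the plot, the toolbox's confusion function gives the misclassification rate directly; a short sketch:

>> [err, cm] = confusion(class2, output);   % err = fraction misclassified, cm = confusion matrix
>> accuracy  = 100 * (1 - err)              % overall correctness in percent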

I bet this result is worse than the previous one, because the performance values are very different: the first one was 0.0151, while this one is 0.4 😀 Here is my confusion matrix.

Confusion Matrix 2

Huwaaa, my correctness decreased from 97.2% to 92.1% hahaha.. it's okay, it just means the standard-error values make a worse classifier than the mean values. Let's compare with the worst-value one..

Confusion Matrix Worst Value

Training MSE

Wow, I got 98.8% accuracy with the neural net 😀 Well, it can be concluded that the worst (i.e., highest) values give the best performance for predicting cancer, compared to the mean and standard-error values 🙂

Let's try another tool and classifier in the next trial 🙂
