Saturday, December 29, 2007

Week 10 (24 Dec - 28 Dec)

Monday, 24 December 2007
Today is Monday, the 24th of December 2007. X'mas eve, which means that everyone gets to leave work early at 12.45pm. We did leave work early today, though only at around 4pm - still earlier than on normal days, I guess.

Today I continued to research and refine my code for detecting whether a single point lies within any given shape, represented as an array of points. It finally reached a workable and acceptable state. I initially wanted to improve the code further, but with the deadlines for SAB nearing, we'd better combine the code first. Better one working program with some bugs than two perfect files with no working output, JL reminded me.

And thus, we took code 1 and code 2 and mashed them up, and, to our luck, got a working program. Well, not really by luck, but by a lot of re-editing of method signatures and copying and pasting of code from one file to the other. At last, a working game, with just a couple of issues to iron out, such as the scoreboard, the threading timer, debugging output being printed in the background, etc.



Tuesday, 25 December 2007
Today is Christmas. Did I get many presents from Santa? Well, I've got all my friends around me. That should be enough for me, right? :)




Wednesday, 26 December 2007
Well, what else could we have done today, save for improving the game code? (It's called whackapeng.c, btw.)

Well, we made 2 major changes today, and I shall highlight them here:

Firstly, we ironed out some bugs in the game play - to name a few, the encircling counter and the circle sensitivity. It was hilarious when we started to test the program: after attempting to draw 10 shapes (for level 1) to see if the code worked fine under a happy flow, we realized that we had forgotten to add code to keep track of how many shapes had been encircled. Interesting.

The second problem we faced was that by drawing a small enough square/rectangle - basically completing just 2 out of 4 sides, plus a little start on the third - the program would immediately assume it's a circle (due to the small difference in the diameters of the 3 random pairs of points) and draw one, which, if it enclosed the center of the shape, would be considered a successful encircling. Thus, I had to reduce the sensitivity of the circle detection, dropping the allowed difference in diameter from 12 to 10. This change did not eliminate the problem, but only reduced the chance of it happening; the player now has to draw a really small shape to trigger it. The alternative was to trash circle detection for the game play entirely, since the code for n-gon point detection should work fine for circles as well, I believe.
Decisions, decisions.

We also added a high score board after the successful encircling of the shapes, though it's just a dummy board, as we have yet to think of a way to perform I/O effectively for the scores and the possible logo. hmmz.



Thursday, 27 December 2007
Well, with the main functionality of the game working, it's time to improve it, pondering the overall threading issue + file I/O.

One of the problems with the threading is that every 20 seconds, the program clears the screen of all drawings, just in case the user has drawn too much on the screen to effectively continue the game. But while the thread in charge of this sleeps for 20 seconds, it loses track of the game play. So even if the game has already moved on to the next shape, or the screen has been manually cleared (if that ever occurs during actual deployment), it is impossible to alert the thread to stop and restart its countdown, so that the clearing of the screen effectively takes place 20 seconds after the new drawing has been rendered. Or is there a way? hmmz. (I wonder if there is something akin to Java's wait() and notify() methods?)

File I/O is just a matter of working out how to store and retrieve the high scores for each level, and the possible corresponding player logos for each of the high scores.

Been thinking...




Friday, 28 December 2007
I'm still working on some improvements to the game (e.g. the overall threading issue + file I/O). Still not quite there yet. Took the last 3 hours of work or so to practice and revise for my SCJP exam, which is tomorrow. Hopefully I will do well :)

Also today, JL wrote some very cool code for using the light to drag primitive shapes around the screen. I can already see the HCI forming. :)




Reflection of the Week:
What did I learn this week? Well, basically, it's performance and the effect of inefficient code.

Let's start with performance - by which I mean code performance - as our entry point. Many times, our deployment or development environment is quite powerful, thanks to modern technologies and high-powered hardware. Which indirectly leads to more inefficient code being written. Why do I say that? When coding a function to solve a problem, there is little regard for the amount of memory that will be used by the function or the entire software. The code will run with little or negligible performance degradation.

Thus even if the function is very inefficient, it won't really matter, because during the development phase (or even the deployment phase), only one application is tested at a time, possibly with a Core 2 Duo processor at 1.8GHz and 2GB of RAM in the background. From my experience with our school projects (again, this leads back to school - but hey, isn't this supposed to be a reflection of my school experience vs the experience I am having now?), we develop applications with little consideration of efficiency. Sure, we know the definition of efficiency, but do we really know the finer details of how to go about achieving it in our code?

I once wrote an 1800-line class file for a very simple application - well, it probably beat the length of many people's code, but I bet it also beat them at being the most inefficient. It had so many repetitions and nested loops, but when tested on my computer (a Pentium 4), it was totally not a problem. Now imagine we put that code up on a not-so-powerful server in Singapore, and the application becomes really popular, with up to 3,000 concurrent users worldwide calling it at once. Now we're really going to see the strain on the server. But if, back then, I had written really efficient code, perhaps a couple of hundred lines, the strain would be much less.

So how does this relate back? Many times, the ideas that come to my mind to solve the problems I face within the code are not really the best way to go about it. Cramming all the code into a nested for-loop in a single thread and executing its loops repeatedly would probably strain the program, which can be seen in the time between when the light is captured by the camera and when it is rendered on the screen. The longer the program takes to process the code, the larger the gaps become.

So conclusion? Thinking before coding :)

What a mess!

Thursday, December 20, 2007

Week 9 (17 Dec - 21 Dec)

Monday, 17 December 2007
Week 9 has come (and gone - for as I'm typing this entry, it's Friday). The presentation during the mid-semester briefing/meeting is today.

Anyway, it went well, actually. Some people laughed at us before we started, for some strange reason which neither of us could comprehend. But they liked the video a lot - which is attributed to CT's fantastic skills in creating the video and JL's interesting music.

Anyway, maybe we're making another one..?



Tuesday, 18 December 2007
ReachOut - Beyond Social Services is today. We went over to the NTU Alumni Clubhouse in the morning and prepared for the games. For my "Amazing Scientific Race" station, I was with Harold, and we shifted indoors due to the threat of rain. We talked about gravity and air resistance in the form of a fun game, which consisted of the kids throwing paper airplanes to reach the furthest distance (further = more points), earning points which they could exchange for prizes. The kids were very energetic, which was a good sign, though I'm not sure if our handling of them was very effective, with all the overlooking of certain things which they should not have been doing.

Anyway, after the race, we had a buffet lunch at 8 Degrees Restaurant (so familiar) and went back to IHPC for a movie screening of Ratatouille, in conjunction with the staff movie screening for December. Well, since I've not watched that movie, might as well sit in and enjoy myself. :)

At the end of the day, though we were all tired from the activities, we sat in for the AC Xmas retreat and enjoyed ourselves a bit before heading home.





Wednesday, 19 December 2007
Back to work. We started the day by manually calibrating the 4 projectors and learnt a bit about projector keystones - the effect they have when projecting while tilted up or down - as well as how the 4 tiled projectors were set up.

It took a lot of fine-tuning and adjusting before we managed to get the projectors more or less aligned. So now we have 4 screens we can drag our windows across - which gets confusing sometimes, because I'm so used to drawing on a reversed canvas for the shape detection that when I draw on the normal canvas, I keep moving the wrong way.

Oh well, gotta get used to it. Anyway, Kevin briefed us that we have the SAB presentation on the 8th of January 2008. We reviewed our milestones and projected end products, and decided to give the game a while more before moving on to developing the HCI. This means we will leave the game in a state where it is stable enough, move on, and come back to it later to refine it. Oh well, a few more days of the game code - gotta work harder!

Now focusing on point-within-a-polygon detection. With the previous code for circle detection, it is possible to find the center of the circle to match the center of the object drawn. However, with a polygon, it is virtually impossible to find its center, as there is no knowing, before the detection, how many corners the polygon will have. But some people will ask: why so? If the user draws (approximately) 4 right angles, it should relate to a rectangle/square, right? Or a shape with 3 corners would result in a triangle, wouldn't it?

Not so! The user's hand is not necessarily stable, and the camera may not capture light in an exactly straight line. The result? A square with 4 right angles but more than 4 points, or a triangle with 4 points - totally unpredictable.

So, without being able to find the fixed center of an n-gon without invoking very complicated and resource-intensive mathematics, how can we find the center of a polygon, if at all possible - or find out whether a point is within the polygon, given a series of points/coordinates?

The Solution - I'm working on it!



Thursday, 20 December 2007
Today is a holiday. I spent the better half of the day studying for SCJP.



Friday, 21 December 2007
Okay. I'm lagging behind in my postings. Let me recall / summarize the things I did today.

Doing more research on Wednesday's problem. Searching and researching. Well, there are numerous interesting links, including some Fortran code written by someone a long time ago. Things like Matlab and other related programs/frameworks do repeatedly resurface in the search results, though a solution in another program altogether would be of little help to us.

Determination finally paid off, and towards the end of the day I finally found a suitable method. Well, it is not surprising that the C code solution is written by the same person who wrote that Fortran code a long time ago. Looks like he was kind enough to redo it in C, since the majority of people out there use C/C++.

http://softsurfer.com/Archive/algorithm_0103/algorithm_0103.htm


Well, let me do a summary of the algorithm here. But for more detailed information and the code implementation, please feel free to visit the website above. So, without further ado, the algorithm, which is called "The Crossing Number", goes something like this:

It accepts as arguments the single point (P) to be checked and an array of all the polygon's points (where array[n] == array[0]). From P, it casts a ray towards x = infinity while keeping y constant (i.e. towards the right, parallel to the x-axis on an x/y graph). It then traverses the array, and for every two consecutive points (i.e. each polygon boundary edge), it determines whether the ray cuts across it, counting the total number of times the ray crosses the polygon boundary. If the total is even, the point is outside of the polygon; if odd, it is inside.
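The algorithm above can be sketched in just a few lines of C. This is my own write-up of the crossing-number test (closely following the well-known softsurfer version linked above, though the names here are mine), not a copy of our project code:

```c
typedef struct { double x, y; } Point;

/* Crossing-number point-in-polygon test.
   V holds the polygon's vertices, with V[n] == V[0] as the algorithm
   requires. Returns 1 if P is inside the polygon, 0 if outside. */
int cn_PnPoly(Point P, const Point *V, int n)
{
    int cn = 0;                           /* crossing counter */
    for (int i = 0; i < n; i++) {
        /* does edge V[i]..V[i+1] straddle the horizontal ray from P? */
        if (((V[i].y <= P.y) && (V[i+1].y >  P.y)) ||
            ((V[i].y >  P.y) && (V[i+1].y <= P.y))) {
            /* x-coordinate where the edge crosses the line y == P.y */
            double vt = (P.y - V[i].y) / (V[i+1].y - V[i].y);
            if (P.x < V[i].x + vt * (V[i+1].x - V[i].x))
                cn++;                     /* ray to the right crosses here */
        }
    }
    return cn & 1;                        /* even = outside, odd = inside */
}
```

The upper/lower-edge asymmetry in the straddle test (<= on one side, strict > on the other) is what keeps a ray that passes exactly through a vertex from being counted twice.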

Pretty ingenious, right? Using this new-found methodology, I proceeded to manipulate it for my own selfish means and finally overcame the problem of detecting whether a point lies inside or outside an n-gon - a problem that had been resurfacing in my mind frequently for quite a while now.

Another thing I've learnt (although it was quite a while back) is that the coordinate (0,0) of the image is actually at the top right corner, not the top left corner I had always assumed. Probably because during the circle detection I was using an unflipped version of the canvas, causing everything to have a mirrored effect.

Okay, that's all for today. Hope you enjoyed reading.

Reflection of the Week:
I'm not sure if I have mentioned it previously, but I joined a Yahoo! Group on OpenCV, and have also trawled through many forums and websites in search of tried and tested (and even untried and untested) methods to solve my problems. One thing I've noticed in all my surfing, including over the past 3 to 5 years (5 being my first arrival to the Wired), is the lack of initiative that exists there. Let me say that I am not stereotyping everyone inside a forum or user group as having no initiative; rather, a small number of people (and their outstanding incidents) have got me wondering if they joined the group just to ask/demand/expect answers. Although the way they ask their questions may be slightly rude and demanding, the other great members of the forums tap their pool of patience and knowledge to try to answer them thoroughly and patiently.

What are the incidents, one might ask? Well, it usually goes along the lines of:

"I am developing [project details] and have an error. Can someone help me out? [insert entire source code here, consisting of a/several complex class(es)]" or "Can someone tell me the answer to [insert a question that has already been answered by other earlier threads, or can be easily found by searching Google]"

Well, it is true that forums and user groups are a source of information - a place to find solutions to problems - but should we abuse that by pasting our entire source code there (or worse, only a portion of it with no explanation, expecting others to know the rest) and expecting people to scan through all of it to understand the logic and find out what is wrong? Or by demanding a quick answer to a problem which could easily be solved with a little effort (such as by looking at the API available online)?

IMHO, if there were at least attempts to understand their own problem, and questions were asked within a narrower scope, it would be much easier and quicker for people to respond. Anyway, who would be inclined to find problems in code amounting to 1000 lines with no incentive in return?

Perhaps, a better way to put it would be:

"I am developing [project details] and have encountered a problem which I cannot seem to comprehend. I've checked the server logs and it mentions that I have a buffer overflow thrown on line 35 (highlighted in red). I did a search on the method being called and did not notice anything wrong with the return type, nor does it throw any checked exceptions which I am supposed to handle. Could it be that the variable passed in as the argument during the for-loop is null or of the wrong type? I know it might be tedious, but can someone kindly help me out, please? [insert source code of the method in which the error occurs and any related methods.]"

And of course, a little thank-you after the answer has been posted, and an update on whether the problem was solved, are appreciated - to show that the helpers' hard work and time were not wasted after all. Well, perhaps after going into that level of detail to state their problem within a narrower scope, they would notice the error themselves and solve the bug in their code before they even hit the "post thread" button.
Thankfully, only a small handful of people behave like the former; the rest behave in a much more mature and well-mannered fashion.

Negative online etiquette aside, I found many people on these forums and groups helpful and passionate about sharing their knowledge. Perhaps they are the ones whose posts we should value more, and who make the forums a good place to hang out and learn. :)


Ciao for Week 9!

Tuesday, December 11, 2007

Week 8 (10 Dec - 14 Dec)

Monday, 10 December 2007
Week 8, and this means 8 more weeks to the end of my internship at IHPC. Giving it some thought over dinner, I am certain that I will miss IHPC and the Cove once I'm gone. Working here at IHPC, though for only 8 weeks so far, has not only been fun, but also an educational and eye-opening experience. *opens eyes*

Now that I've opened my eyes and am finally awake, let's start this week fresh, energetic and in the mood of X'mas. I want my play-doh please. Whoops.

Odd shape detection is very challenging, with all the strange point coordinates it returns and attempts to draw lines to. After spending some OT on Friday separating the method used to draw the coordinates from the one that calculates the corners (making the code much neater and easier to understand), I sat down the entire morning to do a code walk-through. My waste-paper stack cum mouse pad grows with every sheet I use. :)

Anyway, I finally managed to more or less pin down the problem before I called it a day - the coordinates being passed into the array of points were very strange numbers, such as (18912345, -1452) and (-523, 8), resulting in lines being drawn all over the place in an attempt to connect these nonexistent points, which seemed to fluctuate over time. With the clock hitting 6.30pm, I decided to leave the office and grab some chow on my way home...

I've given some thought to the skeleton of the slides for Monday - hopefully I'll be able to get some interesting pictures to add to them. And hopefully they won't bore anyone to sleep.

*back @ home*
Starting on SCJP is not easy. I've spent 0.0 hours today studying for it. The thought of it resurfaces regularly in my mind, reminding me to study, but alas...

Picture of the day ...

I mistook this as a pillow by the roadside. Guess I was still sleeping.



Tuesday, 11 December 2007
RedSteel - and the "oh yes" scene.

What should I say? Eureka! Or whatever word has the same meaning.

Before lunch, I finally managed to find out why the coordinates for the points were so strange - they were actually the values of memory addresses, thanks to a wrong variable being added to the array index. After lunch, I managed to fix that problem and moved on to ironing out some other bugs, such as deleting some unwanted repeated code to make the program more efficient (hopefully), so that it does not skip a few points when the total number of points and pointers increases [which happened a few times for some strange reason, where the total # of points increased to 110+ but the program was still reading points from around 105].

After 7 days, I managed to find out what was wrong with the odd shape detection code, solve it, and tidy up the code on my side a bit as well. I've also improved my knowledge of the various OpenCV functions and their parameters. The idea behind it is still the same: take only the points required to draw the n-gon shape and draw them, skipping points on the contour which are already drawn, as well as contours with fewer than 3 points.

This n-gon (or odd shape) detection is by far one of the longest tasks I have done for Lightdraw. However, the experiences and challenges it brings are worth exploring and solving, as the focus moves away from what I usually do in school - web dev - to something a bit closer to computer vision and pattern recognition.


Pictures of before and after:
Before...

After...




Wednesday, 12 December 2007
With the odd shape detection / n-gon shape detection done, we moved on to discuss the rest of the game play and the immediate step to take. JL did a great job on the .png pictures which were used as the encircling symbol for the game play.

The next step would be to detect a single point collision within a shape via (ideally) a sequence of points of the shape or (not so ideally) by redrawing the shape.

In order to get the sequence of the points properly and perform this encircling detection more effectively, it was suggested that we perform "point merging". Which is to say, if there are a couple of points on the n-gon positioned closely, within X units of each other, we merge them into one point, so that the shape becomes more defined and there are fewer points to draw on the screen. Which will in turn lead to better point collision code. Hopefully.

After lunch, went to double-check the code and tested it for any hidden errors / memory leaks before updating the SVN.

Just when we were all ready to start on the next step, we were reminded that we had a presentation to do on Monday, and Kevin suggested that we prepare some slides and do a sample presentation to him so that he could point out areas for improvement.

Well, seemed like a pretty straightforward task to me. (or so I thought before Thursday)

Picture of the day:


Cake



Thursday, 13 December 2007
We focused mainly on the slides today, which I had done up yesterday, to present next Monday. However, not having done presentations for a long time, I had lost touch with many presentation skills and did not synchronize well with JL, partly due to my complicated and over-wordy slides.

Naturally, our presentation didn't go as well as we would have liked, and we were given numerous areas for improvement, including rewording and redesigning the slides. Reworking the slides took place throughout the rest of the day (and night), as the next demo presentation was 24 hours after the first one ended.

After the presentation, we started filming the short video to complement our presentation. With JL as the director and myself as the star, who knows what rating our video will get. R21? Just kiddin'.

The idea may be good, but without a good presentation, no one will buy the idea.





Friday, 14 December 2007
Today I learnt the meaning of 3 minutes. And good customer service.
That aside, today we sat through some presentations by the AC staff. We observed how they presented, what points they did well on, and where they could improve. Indeed, every day seems to be an eye-opener. It seems that many people fear presenting, and occasionally choose to memorize their slides/speech, which should not be the case. Memorizing (IMHO) means that the speaker is not flexible in his speech and panics when he/she forgets the words or the sequence of points (I learnt that the hard way at the MSP dinner too). A memorized delivery also tends to sound monotonous, with no tone of enthusiasm.

Anyway, after that, we all headed down to doc green for some healthy food (a man of my word), and witnessed some humorous incidents at the Vaio roadshow. Then it was back to the Cove for more slide editing and video filming before finally presenting to Kevin and Harold again. This time round, they mentioned that we had improved from the previous time, though there were still many points to take note of. They gave many valid suggestions for improving the slides, as well as some learning points from their own experiences. Thank you, Kevin and Harold, if you are reading this post. :)

Finished up some last minute work on the slides, and finally left the office at 9pm.

We take some, we give some.

Picture of the day:




Reflection of the Week:

I always thought that presentations relied only on the speaker - how confidently the speaker portrayed himself and how well he elaborated were somewhat all that mattered. Of course, the speaker must know his/her stuff thoroughly enough to do so.

Or at least that was my perception until this week at work.

When we got the feedback on our first presentation, it was then that I realized how many things I had been doing wrongly in my past few presentations in school. It is true that while presenting, we do a few things which we do not consciously notice, such as swinging our arms and playing with things in our hands. And we need people to tell us that we are doing such things before we actually realize it.

I also thought that speech came fluently to the speaker, so that abstract slides, or even no slides, could still get the message to the audience within a short period of time. But again, I was proven wrong, as I stumbled a couple of times during our rehearsals and got tongue-tied. Perhaps I need to speak slower in order to be concise and clear.

As for abstract slides: to me, I know what that one word on the slide means, and/or how great a screenshot is in terms of the time and effort spent. But putting myself in the shoes of the listener, the word is just another word, and the screenshot might be just another possibly photoshopped picture.

But this experience is great, for in school, the presentations focused mainly on the way we portrayed ourselves (formal attire, etc) and the project which we had done (the so-and-so system which does ABC). Rarely were we actually corrected on things like reading from the slides, speaking too softly, not making eye contact, body movement, or not presenting confidently. The focus was more or less on attire and project.

I'm not saying that there was no help or constructive feedback at all from our tutors and peers. But perhaps the lack of seriousness of the situation inside a classroom - with all our peers looking equally worried as they frantically coded and amended their programs before their turn (myself included), half-listening to us - did not really drive the point home. Also, with our friendly tutors as our evaluators, the familiar faces removed one of the stress and pressure points on us.

Peer-to-peer evaluation is equally important and should be taken seriously, though it is rare that peer evaluations include pointers on presentation skills - they dwell more on project and teamwork issues. In my experience, very few friends have come up to me and actually told me to my face that I was doing something wrongly. Of course, if feedback came my way, I would have to put aside any emotional feelings and see the feedback as it is, not judge it by who said it.

Well, at the end of this post, I would just like to summarize the learning point for this week, which is: good presentations need quality visual aids to complement the presenter. And when given feedback, accept it modestly, thank the person, and work to improve on it if it is a valid point.

Anyway, hope Monday's presentation goes well.

Monday, December 3, 2007

Week 7 (3 Dec - 7 Dec)

Monday, 3 December 2007
Ahh! Monday, the start of a brand new week. After the long weekend, inclusive of being burnt on a 22km kayaking expedition around Pulau Ubin, it's back to work.

Circle detection and rectangle detection seem okay - but a problem arises in the code: when drawing a rectangle or any 4-sided polygon, the user may have an unstable hand or be unsure of the direction to travel to create the appropriate shape. Ideally, the result is a rectangle with 4 distinct corners. But in reality, after cvDilate-ing and cvCanny-ing it, there will be 1 to 3 additional points detected along with the 4 distinct corners, due to the uneven width of the light and the straightness (or rather, the un-straightness) of the line being drawn.

Solution: modify the rectangle detection code so that it can dynamically detect shapes with 3 or more corners and draw their outlines. This way, there will not be a need for a separate set of code for each shape - triangle, quadrilateral, pentagon, hexagon, etc.



Tuesday, 4 December 2007
Tuesday.setTasks(Monday.getTasks());

I'm still doing Monday's task.

Some trivia: RSVP is the abbreviation of the French phrase répondez s'il vous plaît.




Wednesday, 5 December 2007
This blog has been discovered! There is no more hiding in the shadows of my inner random thoughts. Perhaps some censorship is needed in the following 9 posts (9 weeks). Oh well, I'll just carry on until told not to. :)

Odd shape detection. Still on it - still refining it. Managed to get more or less accurate code up and running, and to understand the source code better, including that one-line if-statement again, after asking Harold about it and being reminded that I had not learnt from my first week's post. Perhaps I should really take a weekend out to read that book I borrowed about C, as well as the C++ books that Mr Yeak has kindly lent me.

Anyway, we also discussed a bit about the concept game - its game play, some rules, and things to take note of. We got a list of tasks to do and will start working on it as soon as I get the odd shape detection drawing separate polygons without connecting them.

Ms Chiang sms-ed me today to inform JL and myself that we have to do a short 10-15 min presentation about the Lightdraw project to our cohort during our mid-sem briefing (17 December 2007). We discussed this with CT and Kevin and decided to do a progressive introduction via a slide show and a short video showcasing Lightdraw in action. Time to factor in some time to do the script and slides within the next 2 weeks. There go my free nights and weekends (if I had any to begin with) :)

And one more thing, Merry X'mas.





Thursday, 6 December 2007
The high wall remains firmly rooted to the ground, unbeaten by the multiple attempts that we have put forth to bring it down. Swaying slightly in the breeze, it looks down on our feeble attempts and laughs.

Perhaps in my imagination.


I'm still doing odd shape detection with separate polygons. There's some progress - at least I know where my previous algorithm went wrong. The corners of the polygon do not necessarily get detected sequentially; thus, for a pentagon, the 3rd corner drawn may very well become the first corner in the detected contour. This makes slicing the sequence of points based on the number of corners (as in the polygon detection) impossible, as there is no way to tell how many points the user's shape may have - even for a regular polygon - due to the width of the light detected by the camera and the non-linear hand movement across the screen, even for straight lines.

Task now: thinking about how to effectively separate/slice out the points belonging to a single closed shape from the main sequence of points.



Friday, 7 December 2007
I think I've got it - but only in theory. It took me the entire morning to figure it out, and the entire afternoon to restructure the code so that it worked via two functions instead of everything being lumped together. (Perhaps I'm not understanding the code well enough?)

The way I have thought of to detect whether points within the sequence are connected is by checking, based on the original picture, that the midpoint between them is of the same colour, with consideration for thresholding. If it is, that means there is a line drawn between them and they belong to the same shape, connected by that line.
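The core of that check fits in a few lines. This is only a sketch of the concept (the function and the plain grayscale buffer standing in for the camera frame are my inventions, not existing project code):

```c
/* Two detected corners are taken to be joined by a drawn line when the
   pixel halfway between them is also "lit", i.e. its intensity clears the
   threshold. img is a grayscale image of the given width, one byte per
   pixel, row-major. */
int connected_by_line(const unsigned char *img, int width,
                      int ax, int ay, int bx, int by, int threshold)
{
    int mx = (ax + bx) / 2;           /* midpoint, integer pixel coords */
    int my = (ay + by) / 2;
    return img[my * width + mx] >= threshold;
}
```

A single midpoint is of course fragile for long edges - the stroke could dip below the threshold between the midpoint and either endpoint - so recursively checking the midpoints of each half would be a natural extension if the simple version fails.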

However, I've only thought the concept through. If it fails, I'll have to go back to CT's way of manually checking each pixel and classifying it to see if it belongs to a certain pattern - which we suspect will be very slow and will add further lag to the software's performance. But we will never know unless we try, and then again, with the Mac Pro being so powerful...



Reflection of the Week:
This task of odd-shape detection for 2 or more shapes is taking quite a while (5 days so far, up to Friday). Well, I admit that the confusing usage of the various parameters passed into the functions is one reason - many parameters are only generally explained, or described in computer-vision terminology I'm still picking up...

Anyway, that aside, Kevin mentioned to me earlier in the week how working in an environment outside of school differs in terms of real-world experience, which I have to agree with to some extent. In school, we are unconsciously spoon-fed to a degree. When assignments are given to us, we already know they are doable, and that the answers we seek are inside the lecture notes. We know what the criteria are to score an A/B/C/... (at least for the old system I went through).

Again, I have to clarify that it is not that I hate studying in school or being spoon-fed - when I first came into TP-IIT, I needed all the help I could get to write a simple Hello World program. Spoon-feeding to some extent in school is not a bad thing; isn't it by watching and understanding how others do something that we learn to do it ourselves, and do it better? But when it comes to exposure to industry practices and tools (e.g. SVN), attachments like this are perhaps the most ideal way for a student to get hands-on experience.

Working in an environment outside school is akin, IMHO, to doing the bonus task of an assignment, which targets a totally different segment of the working software. That aside, for assignments we could always ring Dr Eng's staff-room extension and ask for advice when we met with problems (whoops, sorry for disturbing you for the past 2 years, Dr Eng!). But outside the school boundaries, some of the problems we are attempting to solve may not even be solvable with the state of the libraries/technologies at this point in time (the opposite of assignments, which we know are possible).

Now, there is no number to call for specific advice on the uncertain parts. Again, not that I am complaining - it is a good way to gain experience and learn to solve my own problems, using forums, interest groups, APIs, white papers and whatnot. Thankfully, I took the CDS "Using the Internet as a Research Tool" (thanks Mdm Jamila), and it is perhaps time to revisit some of the first lessons I learnt in that subject..

At the end of the day, each approach has its pros and cons. And regardless of each phase, we should make the best use of what we have to fully learn as much as we can from our differing experience and grow/develop ourselves.

Next week, I throw another thing up in the air to juggle within my 24 hours each day - studying for the SCJP; the test is on 29 December. Will I survive?

Wish me luck!



Reminder to myself: write maintainable code...

Wednesday, November 28, 2007

Week 6 (26 Nov - 30 Nov)

Monday, 26 November 2007
I wonder why people call it Monday blues? There was nothing particularly blue about today. Nevertheless, I woke up feeling like I needed more sleep. Anyway, back to work on week 6 - effectively making it 10 more weeks to go.

The day started with polishing up circle detection and committing it to the SVN repository. However, a slight glitch ended up with our main file, lightdraw.c, being overwritten by an out-of-date version, which we suspect suffers from a memory leak. Thankfully, we had a working copy of the code on someone's computer and managed to restore it. In any case, SVN can revert the code to a previous revision in the event of such an accident. Anyway, yea.. that's about it for the morning.

In the afternoon, we visited Dell Computers and talked to their MD about loaning a computer with 2 dual-output graphics cards (4 monitors). Its specs are good - it can support a game rendered across three screens, as demoed in their lab. The reason they lent it to us is so that we can harness its power to create multi-screen software running on its 4 monitors, showing what 4 monitors can do compared to one. We brought it back to the office and hooked the system up, but not before looking at its innards.


A cool piece of hardware; hopefully we'll make full and good use of it during its precious time with us. :) With the new machine, we have also partly reworked the layout of the office to make room for the new addition to the family. With that, we ended another exciting day at IHPC.

Heard that most people enjoyed themselves at the company's dinner and dance. Cool. Well done Bernard for winning the first prize!


Tuesday, 27 November 2007
Today was spent at CMPB doing my pre-enlistment medical check-up. After 4 hours of check-ups and tests, I am certified as Pes A L1. So is that a good thing or not? hmmz...



Wednesday, 28 November 2007
Today I started on square/rectangle detection. I initially searched around for the Hough Transform, but its parameters had to be specially tuned for each picture of a different nature. For example, a picture of a building needs different thresholding parameters passed into the method than a picture of some bathroom tiles - and finding the right combination of parameters was no easy task. At least for me.

Anyway, after that, we revisited circle detection, with some brainstorming on the various rules we would like to set for the game. JL did a very good job creating a square on the black canvas which the user/player has to draw a circle around to make it disappear. Something simple for now, but I believe it will get more complicated. However, the way circle detection works in the software is quite strict about how the circle is drawn: the circle's average diameter across 6 evenly spaced points (3 point pairs) has to differ by no more than X units from the circle drawn by the computer.

On the way home, JL suggested that one area to look into is ensuring that a contour is of a certain length before calculating whether it is a full-fledged circle. This would be a valid check because if the user just flashes the light once in the direction of the camera, the Canny method makes the blob appear as a contour whose sampled diameters are close to equal, just like a drawn circle. Thus the loophole in the game was that if the player flashed a light quickly enough at the centre of the square, the program would take it that the player had drawn a circle around the square's centre. A cheat, probably? :)

To improve the algorithm, we discussed it for quite a while. Kevin suggested tightening the conditions so that the sampled diameters had to be within a ratio of less than X before the shape is considered complete. However, drawing a long rectangle (somewhat similar to a light trail across the screen) would fail this check.

In theory, it will be unfeasible, but in practical implementation...?



Thursday, 29 November 2007
With circle detection being worked on by JL for collision detection (or rather, encirclement detection), I moved on to another algorithm suggested by CT - the Hough Transform, a rather mathematical approach to detecting circles in the input image. However, it might have worked too well: even normal hand-drawn circles were not detected as circles, for lacking appropriately curved arcs (or maybe my hand-drawing is lousy).

Also touched a bit on polygon detection (but not squares, since squares were detected as circles too - their average diameters and side lengths are about the same). I referenced code from the net and the sample OpenCV code and adapted it into my function. Again, it worked really well - perhaps too well, as rectangles sometimes had a slight bump in them (due to shaky or crooked lines from the torchlight) which made the program count 5 or 6 corners instead of 4.

Other polygons, though, such as trapeziums and rectangles, worked fine with the code.

Anyway, Bernard won first prize at the D&D - a Wii. We spent some time playing with it on Wed and Thurs to see what the Wii is really like, after hearing so much about it. It was really cool: using infrared and velocity estimation (I assume), it can detect how much force is applied or what action is performed during the game. Another great piece of HCI compared to the keyboard/mouse pair or the traditional gaming controller. However, after playing bowling for a while, my shoulder joint started to ache. Haha. Guess it's time for me to get more in shape - by bowling? :)

Anyway, here is one of our Mii characters. I won't say who it belongs to though...




Friday, 30 November 2007
Nothing much can be said about work today. Just did more on circle detection using Hough Transformation (cvHoughCircles). Read Thursday's post for more details.

After I told CT that the Hough transform was rather too accurate, he suggested morphological thinning, which thins the edges of the shape so that, ideally, it appears as a single line 1 pixel wide. The problem was that the drawn shape was not uniform enough to thin equally on all sides; as a result, some parts got thinned away to nothing. Well, I guess there are pros and cons to each methodology. Lastly, I tried a combination of both techniques, but the results were not satisfactory either. Maybe it's some of the parameters I used?

Anyway, on the way home I witnessed a particular incident. A lady boarded the train with her two young daughters. Unfortunately, there were no free seats at the time, but thankfully, a young man gave up his seat to her. As all loving mothers would, she let one of her daughters take it. The other daughter she had initially carried, but I suppose she got a bit tired, so she put her down in front of her older (I think) sister. The best part? The people sitting to the left and right of the mother and her daughters did not budge. Instead, they just looked at them. Interesting, huh?





Reflection of the Week:
Well well well. End of the 6th week, which means another 50 days to go (excluding the weekends of course).

With the nature of the company being a research-based one, there is usually no rush of deadlines and/or binding to specific tools. It was mentioned a couple of times - deadlines are quite relaxed, and there is room for thinking "what if" during the project, instead of just "how to".

I also had to keep reminding myself to set a deadline and a goal for what I want to achieve each day, so that I do not end up coding until 7pm-ish every day. With a proper goal in mind, I can work towards it with confidence and speed.

Last point regarding work: after a while, the objectives become less clear, and milestones have to be set. Initially we were supposed to create a nicer light effect (with edge blending), then work on the game, which is in progress. But because the "game" mode's gameplay details are not really defined, we sometimes veer off course in what we develop and research, moving all the way to cross-detection and whatnot. As such, I found it useful to have mini-milestones, even if only individual ones to work towards, so that I know what I am achieving or what needs to be achieved. Speaking of which, I had better check with JL, after I finish the shape detection, on what the proposed specs should be...


Another thing I noticed this week is about human nature (off-topic from SIP). On Sunday, I took part in a kayaking race (or canoeing race, if you prefer) with 150+ participants. Many people were attempting the race for the first time (me included) and were unsure of the currents and choppiness of the water out at sea, especially near the coastal edges of Ubin. And well, quite a number of them capsized. Even as participants in the race, my friends and I stopped our crafts and helped them back onto their kayaks, even though it was not our job to do so. All in the name of good sportsmanship and whatnot, right?

However, there were plenty of others before us who just went past them without helping or staying with them until a rescue boat arrived - especially those who capsized in the middle of the channel, where other motorised boats frequented. Reflecting on this brought me back to Friday's post about the mother with two daughters and the passengers seated on either side of them. I mean, considering their age, those passengers might well be entitled to those seats. But after sitting for so long, shouldn't they have noticed this mother of two struggling to balance herself while keeping an eye on her daughters? Shouldn't they have given up their seats, after resting from, say, City Hall to Bedok, to the poor mother, who ultimately resorted to squatting (you can see her at the bottom right of the picture) to keep her balance (lower CG) and take care of her two daughters?

What these two incidents have in common is that these other people, on seeing someone who obviously needed help they had the capability and/or resources to give, merely sat there staring at the situation - somewhat unable to react to it, or too self-centred to do so?

I don't really know. After all, I am in no position to judge or tell them on what they should or rather, shouldn't be doing, or how they should react. Is this a sign of being self-centered, or just conflicting morals and belief in the right of way?

Wednesday, November 21, 2007

Week 5 (19 Nov - 23 Nov)

Monday, 19 November 2007

Well, I returned to the office to play with the edge-blending code completed last week and, to my surprise, there is a hidden memory leak in the program. Which is strange, because I had already done quite a few rounds of code walkthroughs, double-checking that I had freed all the pointers used in the program. Well, except for two, which are used as function parameters. And when I released them before the function returned, the memory leak disappeared - which led me to wonder: does declaring pointers in a function's parameters also create new pointers in the system? Weren't they just names or unique identifiers within the method, and not actual pointers? hmmm..
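For the record, a minimal sketch of what I suspect was going on (hypothetical names, not the actual lightdraw code): a pointer parameter is only a local copy of an address, so passing it allocates nothing by itself; what leaks is any buffer allocated behind that pointer inside the function if it is not freed before returning.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sums an array via a temporary working buffer. The parameter `src`
 * is just a copied address - no allocation happens by passing it.
 * The malloc'd scratch buffer, however, must be freed explicitly:
 * remove the free() below and every call leaks n ints. */
static int sum_with_scratch(const int *src, int n) {
    int *scratch = malloc((size_t)n * sizeof *scratch);
    if (!scratch) return -1;
    memcpy(scratch, src, (size_t)n * sizeof *scratch);
    int total = 0;
    for (int i = 0; i < n; i++) total += scratch[i];
    free(scratch); /* the line whose absence causes the leak */
    return total;
}
```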

Anyway, with edge blending more or less done, it was time to move on to reading up on circle detection, plus some code I downloaded off the OpenCV Yahoo! Group. Quite an interesting read; should be fun to implement.

We also went down to Beyond Social Service, a charity-cum-social home for unfortunate children facing family issues beyond their control. We collected cards asking what they wanted for X'mas as well as their ambitions when they grow up. IHPC is holding a charity event on 18 December for these children, and Kevin has roped us in to run a game station. JL's and my station will definitely be an interesting one :)




Tuesday, 20 November 2007

Well, perhaps not as interesting as I had thought - I'm still trying to understand the code from the Yahoo! Group and reading up on the various constructs used within it, such as CvBox2D and cvFindContours. The documentation is pretty abstract and, I believe, implicitly assumes the developer already has a background in image/object recognition.

But nevertheless, today is spent more on reading and understanding and trying out.

To summarise today's post: the reason I am researching circle/shape detection is that we hope to convert lightdraw into a POC multi-player real-time game, where interaction between the players and the computer takes place in the form of light trails. Hopefully it works :)



Wednesday, 21 November 2007

Yea! Managed to get the code working, or at least to a certain extent. I ported my mini-sandbox over to the actual project and it works, though a bit too well for its own good. Now even curved straight lines are detected as circles, while some circles that end prematurely are not detected at all. I spent the better half of the day playing with the variables and adding an additional cvDilate to smoothen the light outline - but the cvFindContours method does not seem to cooperate and gives strange results.

Later in the day, I researched more about circle/shape detection, and quite a number of sites mentioned the Hough transform and how useful it is in helping a program decide, from predefined points, whether the shape in an image is a circle/square/cross, etc.

Not too sure on how it should work. Perhaps tomorrow will be another reading up day. :)



Thursday, 22 November 2007

We started off the day with the morning at the library as there was a teleconference going on in the Cove. We updated our journal and were tasked to help with IHPC's D&D gift wrapping. It was pretty interesting to help out in some of their various activities and along the way, I learnt how to tie a ribbon. :)

Back from lunch, then to work. Had a nice lunch at the Business school canteen, generously sponsored by Kevin, our cool supervisor at IHPC. Worked a bit more on circle detection and improved the code. Initially, the code based the circle on two points - the start and end points along the edge of the contour - and decided whether their distance was close enough for the shape to be considered fully closed, in which case a circle is drawn around it. This was under the assumption that the two points obtained really were the start and the end. However, the OpenCV library returns the contour in such a way that the last point of the array is actually a random point close to the start - making it hard to decide which contours are closed shapes and which aren't.

Thus I experimented a while here and there and arrived at a more accurate algorithm. Well, I won't say it's the best, and I'm open to feedback on better ones :). Here goes nothing: the program now takes two pairs of points on opposite sides of the contour and calculates the distance between the points of each pair. For a circle, give or take, the two distances will be quite close (within 20 units of each other). For a non-circle shape, such as a large oval or a broken circle, the distances will be rather far apart (more than 20 units).

And for circles not properly captured by the camera, the two distances can additionally be compared with the diameter of the circle the program wants to draw onto the screen: if either distance is close enough, the circle is drawn; otherwise it is not.

I have not actually merged this into the C file yet, as I had taken over JL's combined edge-blending and trailing code to see if I could make it more OO and check for errors such as pointers left behind. Well, I double-checked the program and freed all the pointers. However, a memory leak still occurs. After mistaking "size + 1" for the size variable being incremented by 1, I decided to call it a day. We finally left the office at 6.45pm.



Friday, 23 November 2007

My life revolves around circles, or so it seems - circle detection for the whole week now. When I try to recall what I ate yesterday for lunch, or what date was it on Monday, I cannot seem to recall. Everyday seems to have new things to do, new challenges to overcome that it feels that time just flies by. *flies*

Anyway, I've increased the number of point pairs to 3 (6 points in total), and the average of these three is compared against the computed circle for increased accuracy. Pretty okay - it's just that some circles which are not quite complete, but are nevertheless still circles, are not detected by the software.

Maybe I shall post up a few pics of the circle detection some other time. Anyway, today is dinner and dance for the company, hope everyone enjoys themselves with all the games, prizes and dinner.



Reflection for the Week:

Lightdraw seems to have been making healthy progress for a month now, in my humble opinion. However, there are lots of other things to be done to improve the software. Trailing and edge blending have been integrated into a single file, and on Tuesday we swapped our gcc compiler for g++ so we could use data structures such as queues, which C does not provide out of the box.

With circle detection more or less there, the next step would be collision detection with rendered shapes on the screen. The POC game which involves multiple players playing simultaneously and interactively on an improved piece of lightdraw code will hopefully be something achievable within the next week.

Other than that, I'm getting more or less used to the travelling to work each day, and I've even found an alternative route for when I miss my usual bus one street away. However, I do not get ample sleep on either bus journey, as both are rather short. Since the time spent travelling to work and back each day is long, I've learnt to use my time at work better, spending it productively so that not a day goes by in waste. Hopefully we can finish the project faster too, with some cool new add-ons.

Last thing I've learnt this week from various incidents: email is a voiceless and toneless medium of plain text. Words are taken at face value unless elaborated on, so I must remember to choose them wisely, or elaborate more. :)

Monday, November 12, 2007

Week 4 (12 Nov - 16 Nov)

Monday, 12 November 2007
Heroes get remembered. Legends live forever.
Pondering, pondering and pondering. Together with the Monday blues. I have put aside the first and only calibration of the system after a bit more tinkering and am now working on a nicer light effect, akin to Mac OS X's screensaver. But cvSmooth and cvDilate do not seem to be doing the job - if anything, they make the output blurrier than ever, mixing the colours and giving me purple for things like my green shirt and hand. I think OpenCV has reached its limit in this aspect. Perhaps some research has to be done on OpenGL?

And, how can a camera tell the difference between a white shirt and a white light? After thinking about it on my way home via an alternative route which let me sleep quite a bit, here are my thoughts:

A camera captures what it sees on a 2D plane. Light reflects off objects and into the lens. A white shirt and a white light will ideally both give a value of 255 for all RGB channels. Which means, as far as the camera is concerned, both objects are 255.

Well, okay - it is arguable that the light's outer ring will be less than 255, and that the shirt will have creases, so its values, too, will be less than 255. But a shirt with a red and blue logo on it still would not be caught by such a pattern.

A workaround is to not render anything that is white, i.e. has 255 for all its RGB values. But when the user wears a white shirt, uses a white light, takes a portrait of himself with the whites of his eyes showing, etc. - all these white areas will not be drawn, which is a problem. The user would then be limited to a particular light/attire before being able to use the project. hmmz..
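The workaround described above, as a minimal sketch (an interleaved RGB buffer and a cutoff of 250 are my own assumptions, not values from the project): keep only pixels where all three channels sit near saturation, zeroing everything else so it is not rendered.

```c
#include <assert.h>

/* buf is interleaved 8-bit RGB, row-major, w x h pixels. Any pixel
 * that is not near-white (all channels >= cutoff) is zeroed out so
 * the renderer skips it. This of course also keeps white shirts,
 * which is exactly the problem discussed above. */
static void keep_only_white(unsigned char *buf, int w, int h,
                            unsigned char cutoff) {
    for (int i = 0; i < w * h; i++) {
        unsigned char *p = buf + 3 * i;
        if (!(p[0] >= cutoff && p[1] >= cutoff && p[2] >= cutoff))
            p[0] = p[1] = p[2] = 0;
    }
}
```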


If you have any suggestions or inquiries, do leave a comment and we will take it from there.

Blogger reminder/tip:
Don't use the angular brackets - they will be mistaken for tags and the whole/portion of the blog content will disappear!


Tuesday, 13 November 2007
If the doors of perception were to be cleaned, man would see everything as it truly is.....Infinite.


The first half of the day was spent on just fixing errors from the code done yesterday as well as updating my SVN repository. During lunch break, JL came up with a possible idea of creating the light edge blending effect using the cvCanny method.

After lunch, spent the rest of the day working on the code for it. However, there seemed to be a problem with the 4 nested for-loops. Looks like I've gotta review my code tomorrow...



Wednesday, 14 November 2007

If life gave me lemons, I would begin to wonder if I was mislabeled in life's database as a lemon tree, instead of a human being.

Continued fixing the code from yesterday, since I had left early. I had to manually walk through each step of the code to check for errors and mistakes. However, as the code was copy-pasted from many existing functions all over the place, it had become quite unmanageable. So after lunch, I finally decided to rewrite the function from scratch.

Rewriting took some time: starting with pseudo-code, getting the nested for-loops to work, then adding the logic chunk by chunk, compiling and running to see whether it would give a Bus Error or a segmentation fault.

It was only at 6pm sharp that I finally got the code working the way I wanted. However, a disappointing problem occurred: every time the camera (or sequence grabber) grabs a frame and cvCanny is run on it, the outlines drawn vary. The lines are drawn very randomly, and the edge-blending effect results in big square blocks which cannot be drawn over.

When testing, the general shape is there, but the output appears to be light with black squares all around. Perhaps I shall play with the method variables tomorrow, increasing the threshold, etc. And also explore more alternatives.

Sidetrack - we installed Fedora 8 on two desktops which were much faster than the laptops we were initially issued. More responsive hopefully means more productive. Special thanks to Kevin for lending us his desktop and an unused desktop belonging to an empty desk.



Thursday, 15 November 2007
We should be happy with what we have. But if what we have is something that can be improved, we should strive for improvements, after all, many innovations would not have come about if their creators were happy with what they had.


Back to playing with the code to see how it can be improved. Did a couple of code walkthroughs but did not find anything missing. Why were the black squares appearing? Why does the blending not seem to be any different with the rest? Why was the program lagging even under normal usage?

At the end of the day, after many of my brain cells had died and I almost didn't make it home, I realised this:
  1. The black squares appear because the number of pixels used to 'blend' the edges is too large. I used a factor of 10, which resulted in 20 x 20 pixel blends.
  2. The program lags because the 4 nested for-loops use up a lot of the computer's resources.
With some problems solved, I left for home at 7pm. There were still some questions left...



Friday, 16 November 2007
Walk where there is no path and leave a trail for others.


We had a demonstration for the ED in the morning, and he was quite pleased with the work, in my opinion.

After lunch, it was back to finding out why the edge blending did not seem to work - and why, when it did, it duplicated itself thrice across the output image.

Code walkthroughs are becoming part of the daily routine by now. Then all of a sudden, towards the end of the day, at around 5.50pm, I finally solved the problem.

The blending did not seem to work (and indeed didn't) because I was passing the wrong image pointer into the blending function along with the original image. So while I spent the better half of the day changing the parameters and blend factors, nothing seemed to work - because I was using the wrong pointer. diao'

Secondly, the edge blending repeated itself three times across the screen because I had mistakenly used the wrong colour channel for the input and output of cvCanny and the fade function. With these two problems fixed, the effects work fine, though they are not very obvious. Hopefully, when the shutter speed works, the effort spent throughout the entire week will pay off.



Reflection for the Week:
With 4 out of 5 days spent getting the edge blending to work, it felt very frustrating, especially during midweek, when my code simply refused to work. Using all the debugging skills I had picked up from school, I finally managed to solve the mystery on the last day of the week. Perhaps from this mini-episode, it dawned upon me that when faced with difficulty and obstacles that seem to have no end, instead of calling it quits, keep overcoming the obstacles one by one and they will soon be over.

I have also learnt on the job about setting objectives and goals and handling my own expectations. On some days, I had to limit myself to reaching a target (such as getting the code to compile successfully) and calling it a day. If I did not, I would probably have spent much more time at work and headed home much later. Well, not that I am trying to be lazy or to show off what a workaholic I am, but with a clear goal in mind and a full night of rest, productivity increases the following day and I am able to progress further.


Well, come to think of it, this concludes my first month at A*Star IHPC. In just 4 weeks, I have learnt many new lessons from various experiences, many of which cannot be taught within a classroom setting. Initially, I was worried about the long travelling distances and the unfamiliar language and development environments I would be using. But as long as we are passionate and interested in what we do, coupled with the willingness to learn, it isn't too hard after all. :)

Looking forward to the next week after a weekend of (insufficient) rest. :)

Cheers

Thursday, November 8, 2007

Week 3 (5 Nov - 9 Nov)

Monday, 5 November 2007
The day started with us talking about our (JL's and mine) integrated code for the application, which is the basis of what we will be working on in the future.

It seemed pretty strange to talk to people more experienced and senior than me as colleagues - but I guess that will happen pretty often soon, huh? Anyway, the code was so hacked together that it took quite a while to re-understand the logic behind it, and I explained several things wrongly at first. When I realised the workings were wrong, I had to redo my explanation. Oh well. After reviewing the product of our work, Kevin and CT identified some areas of improvement and assigned the inverting and resizing of the image to me and JL.

Next, to help future developers and the rest of our team, we had to draw up a document on the workings of the code (call it documentation if you want, but it's a pretty informal document). And this is the first time I've used OpenOffice 2.2 for official purposes. :)

While JL did the initial documentation, I reviewed the code logic, changed variable names and put in more meaningful comments so that people can understand the code with ease. Well, looks like those points from the assignment specs are ingrained in my mind.

Kevin and CT were still setting up the SVN thingy, so we proceeded with our tasks. After rewriting the code, I volunteered to help with the second part of the documentation while JL worked on the code. When I had finished the documentation, he had already finished both tasks for improving the system. Team work, yea!

Soon, CT and Kevin had successfully set up the SVN. It rides on the system's user authentication and now uses the Mac's SSH as the primary means of authentication. Kevin and CT walked us through how to use SVN's basic functions, such as checking out, updating and committing.

We ended the day with the two new improvements complete and documentation finished, uploaded to the SVN.




Tuesday, 6 November 2007
Since we had finished the two areas of improvement, we worked on the next one, which is trailing. As the light is drawn, after X amount of time, the images are supposed to fade away, leaving the canvas empty.

Kevin also invited his friend, an ex-IHPC employee, to talk to us more about Carbon and Cocoa, and how using Quartz Composer can help improve our code quality and development time. I learnt quite a lot in that short talk about the back-end libraries of Mac OS X and the porting over from Carbon to Cocoa. Personally, I feel that Quartz Composer is akin to Expression Blend, but for the Mac - comment me if I'm wrong.

After which, at around 4pm, Dr Eng dropped by to visit us. Really great to see a familiar TP face around. We showed him the project progress as well as the 2x3 tile display.




Wednesday, 7 November 2007
Still working on Tuesday's task.. Very tricky..




Thursday, 8 November 2007
There is no work today as it's a public holiday. A day of rest and of catching up on other things.

"Give the man a break and his productivity will be doubled. Work him through and he will have none"



Friday, 9 November 2007

I arrived at work to find the Makefile for the project updated - as usual, I had forgotten to SVN Update my project folder. I've gotta remember that next time! Anyway, CT managed to create a dynamic Makefile which can build the various source files and put their binary outputs into the /bin folder automatically. This is great because now we can have various versions of the code under different names and still use the same Makefile to compile them.

The last day of the week. Again it was very interesting, with two major improvements done: the trailing effect by JL (it's really cool) and the first calibration by me (well, not so cool). The first calibration seems to be malfunctioning. When I double-checked the code, it seemed to work well, but when run on the Mac with the iSight, it didn't really make a difference.

The question was - how do you determine the difference between a white shirt and a white (ceiling/torch) light? If both of them result in RGB/Lab values of 255, what distinguishes them?

JL said to ask the camera to touch the white object. Interesting thought. Looks like there can be more research about it coming up. Time to combine code on Monday... not?

*lost in thought*



Reflection for the Week:
Week 3 passed just as fast. Pretty soon, I'll have been working at IHPC for a month. The Lightdraw project seems to be going well, and the various people we meet and the additional things we do outside of the Lightdraw project are fun and entertaining - such as going to talks, exploring NUS, new eating places, etc.

I have also more or less adopted better time management to juggle work and my other commitments, and learnt how to use my travelling time more effectively, such as by sleeping or reading.

Fedora 8 has been released. JL has burnt a disc and is installing it on his laptop. I am tempted to do so too, but my current laptop has insufficient memory at the moment, with all the stuff that I do. It's hard to focus and specialize in more than one area of technology - confusing/conflicting markers are not the main problem. It's more that the time spent developing in each area is not equal. Perhaps the first step is to set up VNC at home and be able to VNC to the computer in the office. (haha)

Last thing I have learnt - The walls have ears, and even if there are no walls, the ears still exist.

Monday, October 29, 2007

Week 2 (29 Oct - 2 Nov)

Monday, 29 October 2007

Finally, we are down to work. We started the day with requirements gathering for the project. Kevin, CT, JL and I pooled our brain power to think about what system requirements there are for the project. Many interesting suggestions and potential problems were brought up, points were debated, possible solutions were thrown in, and we finally narrowed down the tentative specifications for now.

After lunch at NUS, CT and I were tasked to do image processing from a camera with consideration for light luminance tolerance. Making use of the OpenCV library, CT had already done a mock-up, and I reviewed his code for the rest of the afternoon. I realised that it is possible to write an if-else in a single expression, e.g. (argc == 2 ? argv[1][0] - '0' : 0).

Apart from that, I also recapped CT's lesson on colour channels and the bit depth of each channel, and on how to iterate through the rows and columns of the picture (that is, the pixels) by adding byte and channel offsets to the starting address of the image structure.

And my favourite site for today:
http://www.cprogramming.com/

And this is what the COVE stands for:


Somehow, I feel like I have seen this before in my dreams..




Tuesday, 30 October 2007

The code analysis continued today. I had to manually walk through each step of the code to clarify my doubts about just under 20 lines of it. I also implemented the idea of using the same image as both input and output, rendered against a dark background within a certain threshold.

Not bad for a day's work, I must say. But the program is still susceptible to memory leaks and segmentation faults. Hopefully I will be able to solve these two problems before I have to leave for school tomorrow. And that dynamic threshold thingy too. But won't the light pixels get affected by the dynamic threshold as well? Questions aplenty, but answers only tomorrow; I shall sleep away the wait.

We also found out the reason why the computer has only one dual-output graphics card. But the dog and pony shows are interesting nonetheless.

And hopefully the .jp servers are up and running on my ride to school :)


Blogger Discovery
Btw, I realized that in compose mode, if there are any angle brackets, they are mistaken for tags. I had written a less-than sign before '20', but it got treated as a tag and my entry went missing.




Wednesday, 31 October 2007

Got to work right on time today. Since I had to take a half-day off to go back to TP to talk about the MSP program to the top 20% of the junior year, I thought I'd be extra hardworking.

I got in and immediately started work on the dynamic threshold. I managed to implement a working prototype, but the camera was constantly out of focus. I feared that I had destroyed the iSight, but CT came to my rescue and proved that it was the code rather than the hardware that caused the problem.

Apparently, by using the same pointer to the image to do multiple things, the image got distorted. When I duplicated the image to another pointer, the image sharpened back to normal. Strange, huh. I also solved the segmentation fault with this change. The segmentation fault came about when the program attempted to read from or write to a nonexistent or protected memory location, or took two screenshots without first releasing the first one. With the duplicate pointer, the segmentation fault disappeared :)

However, the code still suffers from memory leaks. I wonder where the leak is coming from..? Perhaps that is another story to tell on Thursday.



Thursday, 1 November 2007

I came to the office in the morning to a pleasant surprise. CT had already fixed the memory leak problem and the program was working fine. It was able to detect the various light sources and show it up on the screen based on the threshold set.

JL also managed to blend his images together, though there is a bit of a problem - when the images were too bright, darker images could not be blended in. Nevertheless, it was really cool to see his blending in action. Next, Kevin asked us to combine our work. Initially, we hit a dead end - because we had each customized our code so much, it was hard to integrate the two pieces without first understanding the algorithm and workings of the other. We decided to break for lunch at NUS for the beautiful view and food. After lunch, we continued to tackle the problem and finally managed to combine the code, after battling memory leaks and pointer problems.

Though light sources can now be retained on the canvas, we are now looking into improving the algorithm in terms of blending the light together.



Friday, 2 November 2007
Get up, get up morning. Good morning, Keep on Movin'

Just a phrase from a song that got stuck in my head when I woke up. Right now, taking some time during work to blog about yesterday and today. Oh well. Shall update this part a bit later.

Later is actually on Monday of the following week - I'm making the update 3 days late. Oh well.

What can be said about Friday? Hmmz.. We finally realised why the light halos were appearing - the threshold for the light was too low. Increasing the threshold works, but because light conditions may change during actual deployment, I was thinking of having a slider/button and a method to reconfigure the dynamic threshold as and when required. We also played around with some methods to blend the light sources together for better light blending, but some of them made the program run slower.

Apart from that, we also celebrated Hari Raya and Deepavali at IHPC, finally getting to see everyone who works at IHPC (about 170 people). We had a presentation on each of the festivals and some hands-on activities related to them (such as ketupat folding). After which, we were treated to some standard catered food, and we talked to Kevin and Harold about our views on software piracy and open source. The short talk opened my mind to broader perceptions and values around this controversial topic and the advantages vs disadvantages of each.

With that, we finally ended the day. I stayed back a bit to work on the blending and threshold for the project, but was unable to accomplish anything except realizing the reason behind the light halos.

Let's hope the next week will be more interesting. Oh, and hope Kevin gets well really soon :)




Reflection for the Week:

Working with people on new projects is very fun, especially when everyone on the team is committed and light-hearted. This milestone has given me a really good hands-on opportunity to code with C and the OpenCV library on Linux and Mac OS X. Nevertheless, there is still lots to learn and understand. Many times we have to take the initiative to improve on our working prototypes without being told to do so.

After finishing my part on Wednesday and combining the code on Thursday, I was tempted to just stop, as I had reached the intended end goal. Why bother about some minor flaws when the main functionalities are there? This is typical in some projects, where after fulfilling the requirements for a certain grade, I would stop and focus my time on other projects. But I decided not to waste my time at the COVE waiting for time to pass and work to end, but instead to improve the prototype and make it better.

When working in a new environment, I learnt that there are many things which I had taken for granted back in school (I mentioned this previously). Many times, we have to be extra careful so that we do not offend anyone with our decisions or do things that give a bad impression. Also, being familiar with my teammates' way of seeing things and their proficiency in development makes me take their skill sets for granted.

Nevertheless, I see all of this as part and parcel of a working life - a valuable experience to prepare me better for my future job. In fact, I am already (hopefully) adapting to it.

On a more relaxing mood, check out this really cool meeting room located in Biopolis. Would be cool to have a meeting overlooking the city.



See you next week.

Saturday, October 27, 2007

Week 1 (22 Oct - 26 Oct)

Monday, 22 October 2007
Our reporting time was 9am. But we arrived slightly later as we got lost en route. Missing the Shell petrol station which the email indicated for us to look out for (we only noticed it at the end of the day on the way home), we managed to get into IHPC at around 9.20am.

We were introduced to Mr Kevin Veragoo and Cheng Teng, the two other people who are attached to our project. Nothing much can be said about the project I suppose, as a NDA-like document has been signed.

After a tour of the office and being introduced to each member of the AC department, we visited the brain of the network powering IHPC. Computers stacked in racks, cooled by powerful air-conditioning, hummed softly in contrast to the computationally intensive processes running within them.

Lastly, we were introduced to the COVE, where we will be spending the next 16 weeks, as well as to our two laptops, which were not in the best condition in terms of capacity and looks, but nevertheless functional. Mr Kevin and Cheng Teng ran through the project with us, rehashing Mr Lim's description from back in the meeting room in slightly more detail.

Being introduced to Fedora was something similar to being back in OPSY labs with Mr. Lai. I believe that he will be pleased to know that I still remember that the "Command Prompt" is referred to as the "Terminal" in Linux. :) [But please do not tell him that that is the only thing I remember from his class]

It was like learning how to walk again, learning the ropes of BASH commands and the GUI of Fedora 7. The final machine on which the project will run is a Mac with OS X (Tiger), with output on 4 projectors onto one screen.

We had lunch with the rest of the guys of AC department and the two admin clerks at the food court opposite Ginza plaza with the help of the free lunch shuttle. Modest meal of $2 chicken rice. Oh well.

The rest of the day was spent getting lost in Fedora.




Tuesday, 23 October 2007
We got our own user accounts on the Mac computer. Cheng Teng was kind enough to teach us how to SSH and VNC to it. After which, we were informed that for the project we will be using the OpenCV library; Cheng Teng gave us a mini-lecture about the functionality of the library, and the rest we had to find out by ourselves. [Even Kevin was a student at CT's lecture - it goes to show that learning never stops] :)


So, left to our own devices, we surfed around the net to download the OpenCV libraries and installed them onto our machines using ./configure, make and (sudo) make install (plus sudo ldconfig, run as root, for machine-wide availability).

Apart from that, Kevin also mentioned that while CT and he were used to doing C programming in simple text editors (these guys are the pros!), we could make use of IDEs with which we were more familiar. With that in mind, we downloaded the CDT version of Eclipse, only to realise that it needed the Java JRE to run as well.

In all my life, I had only ever installed programs via a .exe icon. But that day, I had to install Java's JRE via the command line (Terminal). After much fumbling around, I managed to get it installed. (During the rest of my attachment, I realized that what I had just done is an oh-so-simple everyday thing in a Linux environment.)


With IDE and OpenCV, we were challenged to write a simple HelloWorld program in C. Using cut and paste of course! :)

#include <stdio.h>

int main(void)
{
    printf( "Hello World!\n" );
    return 0;
}


But cut and paste aside, this was one of our first programs written in C, a programming language not taught in school at all, but used in industry for its speed and stability.

Some links used today:
http://opencvlibrary.sourceforge.net/InstallGuide_Linux
http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html




Wednesday , 24 October 2007

I still remember Chewy saying this line when I was attached to the MIC:
What better way to learn something than diving deep into it and getting your hands all dirty?

Too true indeed, jumping right into the fray with guns drawn was probably the best way to learn about OpenCV and C Programming (save for the ponytails and backpack in Tomb Raider). Unfortunately, before we could do so, I had cleverly tripped the power of the room, triggering two circuit breakers in the process. Thankfully, the "circuit master" of the place managed to get things back and running in no time and we resumed our progress.

The rest of the day, though short, was rather interesting as we played with OpenCV and a webcam, which I used to take a screen shot of JL, my SIP partner, and myself. :)


Another interesting thing learnt is the Mac's window model for its desktop - closing a window only hides it while the process keeps running in the background (similar to the minimize button in Windows): the program's window disappears, but its process remains on the Dock [the taskbar equivalent] in the form of an icon. Quitting the application terminates the task, similar to 'closing' an application in Windows.


Homework for today:
Reading about CVS vs SVN

Notes to myself:
/usr/local/include/opencv and /usr/local (place where OpenCV libraries are stored)
/etc/ld.so.conf (file to add the directory where the OpenCV libraries are)




Thursday, 25 October 2007

Everyday we learn so many new things.

Today, we were given a talk by Kevin about SSH vs Telnet and the differences between them. After which, we were given more time to find out more about CVS and SVN. Kevin brought us to lunch at NUS, followed by a talk about "The Right to Privacy and Personal Data Protection in Singapore" given by a fellow A*Star employee from the biomedical side (I believe), which touched a bit on the Odex case - of interest to us as IT people. Interesting and educational.

Back at the COVE, JL's C program would not compile, and with Kevin's help we identified that he was missing some devel (pronounced like "devil", but short for developer) tools. We were then introduced to the BASH command "yum install package_name" to download the necessary packages, with the help of PBone to find the exact package name.

After which, we played around a little more with SVN and learnt more about the KDE and GNOME window managers in Linux and the history of how they evolved.

Homework:
Finding out how the Eclipse IDE creates Makefiles and compiles C files




Friday, 26 October 2007
A great mystery indeed, that of the Makefiles. We deduced that Makefiles were generated dynamically and automatically by the IDE, which distributes its Makefile components into 3 other files included in the main Makefile. After much trial and error, as well as analysis of the sub-makefiles, we (Kevin, JL and myself) all managed to create our own Makefiles and port them over to the Mac, but not before CT had already beaten us to it. :)

After lunch, we watched "Pirates of the Silicon Valley" [basically a story about how Bill Gates and Steve Jobs started off their companies]. An interesting watch as it talked in detail about the two men, their perception of things and evolving with changes, and making changes, in the technology sector.

After which, it was back to Fedora land. I landed myself in the KDE IRC channel #KDE, where people chat about KDE problems, with technical folks there to help baffled users.


Something interesting: I found out that by stretching the vector-based file icons on the desktop, the icon will actually show the first few lines of the file's content (see picture below). Pretty cool huh?






Reflection for the week:

When I look back at the week, it all seems to have passed so fast, and in these short 5 days I have learnt so many things - perhaps not in as much detail as some of my modules in school, but then most of them are not taught in school at all.

I can still remember my CVS lab in school where, due to a technical fault in the CVS setup, we were unable to finish the lab, losing valuable hands-on experience in understanding how CVS worked. But CVS was still covered in the lecture, right?

Many times during the week at IHPC, I thanked my teachers for teaching a bit here and there about things such as Fedora, CVS and BASH commands (mkdir, rm, ls, etc.) which I found very useful. Though by now the knowledge is slightly outdated, it was nevertheless still useful in some ways. I also found that some things I learnt during my participation in competitions paid off, such as the use of Eclipse, open libraries and some command-line commands (for example, knowing that the Linux equivalent of ipconfig is /sbin/ifconfig).

On the other hand, I also wish I had learnt more during my school semesters - about SSH/Telnet, SVN, C programming and OpenCV, just to name a few. But as Mr. Yeak has mentioned before, we are not taught specific topics in school; we are taught how to learn independently. Many of my friends in other attachments are tasked to do things unrelated to what is taught in school, and though they complain (we all do, right?), all of us learn well.

:)