Wednesday, November 28, 2007

Week 6 (26 Nov - 30 Nov)

Monday, 26 November 2007
I wonder why people call it Monday blues? There was nothing particularly blue about today. Nevertheless, I woke up feeling like I needed more sleep. Anyway, back to work on week 6 - effectively making it 10 more weeks to go.

The day started with polishing up circle detection and committing it to the SVN repository. However, a slight glitch ended up with our main file, lightdraw.c, being overwritten with an out-of-date version, which we suspect is suffering from a memory leak. Thankfully, we had a working copy of the code on someone's computer and managed to revert it. In any case, SVN has the ability to roll the code back to a previous revision in the event of such an accident. Anyway, yea.. that's about it for the morning.

In the afternoon, we visited Dell Computers and talked to their MD about loaning a computer with 2 dual-output graphic cards (4 monitors). Its specs are good - it can support a game rendered across three screens, as demo-ed in their lab. The reason they lent it to us is so that we can harness its power to create multi-screen software to run on its 4 monitors, showing what 4 monitors can do compared to one. We brought it back to the office and hooked the system up, but not before looking at its innards.


A cool piece of hardware; hopefully we'll make full and good use of it during its precious time with us. :) With the new machine, we have also partly reworked the layout of the office to cater to the new addition to the family. With that, we ended another exciting day at IHPC.

Heard that most people enjoyed themselves at the company's dinner and dance. Cool. Well done Bernard for winning the first prize!


Tuesday, 27 November 2007
Today was spent at CMPB doing my pre-enlistment medical check-up. After 4 hours of check-ups and tests, I am certified as PES A L1. So is that a good thing or not? hmmz...



Wednesday, 28 November 2007
Today I started on square/rectangle detection. Initially I searched around for the Hough transform, but its parameters had to be specially tuned for each kind of picture. For example, a picture of a building needs different thresholding parameters passed into the method compared to a picture of some bathroom tiles - and finding the right combination of parameters was no easy task. At least to me.
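
For reference, here is a rough sketch of the kind of probabilistic Hough line detection involved (OpenCV's C API). The frame variable and every threshold below are illustrative examples only - and they are exactly the sort of numbers that had to be re-tuned for each kind of picture.

    #include <cv.h>

    /* Hypothetical sketch only - "frame" is assumed to be the captured BGR image. */
    void find_lines(IplImage* frame)
    {
        CvMemStorage* storage = cvCreateMemStorage(0);
        IplImage* gray  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        IplImage* edges = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

        cvCvtColor(frame, gray, CV_BGR2GRAY);
        cvCanny(gray, edges, 50, 150, 3);              /* edge thresholds: picture-dependent */

        CvSeq* lines = cvHoughLines2(edges, storage, CV_HOUGH_PROBABILISTIC,
                                     1, CV_PI / 180,
                                     40,               /* accumulator threshold */
                                     30,               /* minimum line length   */
                                     10);              /* maximum gap to join   */

        for (int i = 0; i < lines->total; i++) {
            CvPoint* pts = (CvPoint*)cvGetSeqElem(lines, i);  /* pts[0], pts[1] = endpoints */
            cvLine(frame, pts[0], pts[1], CV_RGB(0, 255, 0), 2, 8, 0);
        }

        cvReleaseImage(&gray);
        cvReleaseImage(&edges);
        cvReleaseMemStorage(&storage);
    }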

Anyway, after that, we revisited circle detection, with some brainstorming on the various rules we would like to set for the game. JL did a very good job in creating a square on the black canvas, and the user/player would have to draw a circle around it to make it disappear. Something simple for now, but I believe it will get more complicated. However, the way circle detection works in the software is quite strict about how the circle is drawn. The circle's average diameter, measured between 6 evenly spaced points (3 point pairs), has to be within X units of the diameter of the circle drawn by the computer.
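
To make that rule concrete, here is a minimal sketch of an encirclement test along those lines. The function name, the tolerance X and the way the player's circle is summarized (center plus average diameter from the point pairs) are illustrative assumptions, not the actual game code.

    #include <math.h>
    #include <cv.h>

    /* Illustrative only: does the player's stroke enclose the square, and is it
     * roughly the size of the circle the computer expects? */
    int encircles_square(CvPoint circleCenter, double avgDiameter,
                         CvPoint squareCenter, double squareSide,
                         double targetDiameter, double toleranceX)
    {
        double dx = circleCenter.x - squareCenter.x;
        double dy = circleCenter.y - squareCenter.y;
        double centerDist   = sqrt(dx * dx + dy * dy);
        double halfDiagonal = squareSide * 0.5 * sqrt(2.0);

        /* 1. the drawn circle must actually contain the whole square */
        int containsSquare = (centerDist + halfDiagonal) < (avgDiameter * 0.5);

        /* 2. its average diameter must be within X units of the computer's circle */
        int sizeOk = fabs(avgDiameter - targetDiameter) <= toleranceX;

        return containsSquare && sizeOk;
    }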

On the way home, JL suggested that one area to look into is ensuring that the contours are of a certain length before deciding whether a full-fledged circle has been drawn. That would be a valid check, because if the user just flashes the light once in the direction of the camera, the Canny method makes it appear as a small contour whose measured diameters are close to equal, just like a properly drawn circle. Thus the loophole in the game was that if the player flashed a light quickly enough at the center of the square, the program would take it that the player had drawn a circle around the square's center. A cheat, probably? :)

So to improve on the algorithm, we discussed it for quite a while. Kevin suggested modifying the conditions such that the two measured distances had to be within a ratio of X before the shape is considered a complete one. However, drawing a long rectangle (somewhat similar to a light trail across the screen) would return false for this check.
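
Putting the two suggestions together, the extra guard might look something like this rough sketch - contour would be the CvSeq* from cvFindContours, d1/d2 the two measured distances, and the 0.7 factor and ratio X are made-up numbers.

    #include <cv.h>

    /* Sketch of the two guards we discussed; all thresholds are placeholders. */
    int passes_circle_guards(CvSeq* contour, double d1, double d2,
                             double targetDiameter, double maxRatioX)
    {
        /* Guard 1 (JL): a quick flash of the torch only gives a tiny contour, so
         * insist the contour is a decent fraction of the expected circumference. */
        double perimeter = cvArcLength(contour, CV_WHOLE_SEQ, 1);
        int longEnough   = perimeter > 0.7 * (CV_PI * targetDiameter);

        /* Guard 2 (Kevin): the two measured distances must be within a ratio of X,
         * which a long thin light trail would fail. */
        double big   = (d1 > d2) ? d1 : d2;
        double small = (d1 > d2) ? d2 : d1;
        int roundEnough = (big / small) < maxRatioX;

        return longEnough && roundEnough;
    }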

In theory, it will be unfeasible, but in practical implementation...?



Thursday, 29 November 2007
With circle detection being worked on by JL for collision detection (or rather, encirclement detection), I moved on to another algorithm, as suggested by CT - the Hough transform. The Hough transform is a rather mathematical approach to detecting circles in the input image. However, it might have worked too well: even normal hand-drawn circles were not detected as circles because their arcs were not curved appropriately (or maybe my hand-drawing is just lousy).
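
For reference, a minimal sketch of the kind of cvHoughCircles call involved (OpenCV's C API). The parameter values are examples only - and they are precisely what decides how forgiving the detector is towards wobbly hand-drawn arcs.

    #include <cv.h>

    /* Rough sketch - "frame" is assumed to be the captured BGR camera image. */
    void detect_circles(IplImage* frame)
    {
        CvMemStorage* storage = cvCreateMemStorage(0);
        IplImage* gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

        cvCvtColor(frame, gray, CV_BGR2GRAY);
        cvSmooth(gray, gray, CV_GAUSSIAN, 9, 9, 2, 2);     /* smooth before voting */

        CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT,
                                        2,                 /* accumulator resolution (dp)  */
                                        gray->height / 4,  /* min distance between centers */
                                        200,               /* Canny high threshold         */
                                        100);              /* accumulator threshold        */

        for (int i = 0; i < circles->total; i++) {
            float* c = (float*)cvGetSeqElem(circles, i);   /* c[0]=x, c[1]=y, c[2]=radius  */
            cvCircle(frame, cvPoint(cvRound(c[0]), cvRound(c[1])),
                     cvRound(c[2]), CV_RGB(255, 0, 0), 2, 8, 0);
        }

        cvReleaseImage(&gray);
        cvReleaseMemStorage(&storage);
    }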

Also touched a bit on polygon detection (but not squares, since squares were being detected as circles too - their average diameters and side lengths are about the same). I referenced code on the net and the sample OpenCV code and edited them into my own function. Again, it worked really well, but perhaps too well: sometimes the rectangles had a slight bump in them (due to shaky or crooked lines from the torchlight), which made the program think there were 5 or 6 corners instead of 4.
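
The polygon check itself boils down to something like the sketch below (in the spirit of the OpenCV squares sample; the names and the 0.02 epsilon / minArea values are illustrative). It also shows where the 5-or-6-corner problem comes from: the epsilon passed to cvApproxPoly decides whether a small bump counts as an extra corner.

    #include <math.h>
    #include <cv.h>

    /* Sketch: approximate a contour with a polygon and count its corners. */
    int count_corners(CvSeq* contour, CvMemStorage* storage, double minArea)
    {
        CvSeq* approx = cvApproxPoly(contour, sizeof(CvContour), storage,
                                     CV_POLY_APPROX_DP,
                                     cvContourPerimeter(contour) * 0.02,  /* epsilon */
                                     0);

        if (fabs(cvContourArea(approx, CV_WHOLE_SEQ)) < minArea)
            return 0;                          /* too small to be a drawn shape      */

        return approx->total;                  /* 4 = quadrilateral; a shaky stroke
                                                  can easily bump this to 5 or 6     */
    }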

For other polygons though, such as trapeziums and rectangles, the code worked fine.

Anyway, Bernard had won the first prize at the D&D - which was a Wii. We spent some time playing with it on Wed and Thurs to see what it was really like to play the Wii, after hearing so much about it. It was really cool: using infrared and velocity estimation (I assume), it was able to detect how much force was used and which action to perform in the game. Another great piece of HCI compared to the traditional keyboard/mouse pair or gaming controller. However, after playing bowling for a while, my shoulder joint started to ache. Haha. Guess it's time for me to get more in shape - by bowling? :)

Anyway, here is one of our Mii characters. I won't say who it belongs to though...




Friday, 30 November 2007
Nothing much can be said about work today. Just did more on circle detection using Hough Transformation (cvHoughCircles). Read Thursday's post for more details.

After I told CT that the Hough transform was rather too accurate, he suggested morphological thinning, which means thinning the edges of the shape so that it would ideally appear as a single line 1 pixel wide. But the shape drawn was not uniform enough to be thinned equally on all sides; as a result, some parts got thinned away to nothing. Well, I guess there are pros and cons to each methodology. Lastly, I tried a combination of both techniques, but the results were not as satisfactory. Maybe it's the parameters I used?
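
For reference, the idea CT described is close to morphological skeletonization, which can be sketched with plain erode/dilate calls like this (my own rough version, assuming src is already a clean binary mask of the light - not the actual code). It also hints at why uneven strokes suffer: the thin parts of the shape get eroded away to nothing within a few iterations.

    #include <cv.h>

    /* Rough sketch of a morphological skeleton (a cousin of true thinning). */
    void skeletonize(IplImage* src, IplImage* skel)
    {
        IplImage* img    = cvCloneImage(src);
        IplImage* eroded = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
        IplImage* temp   = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
        cvZero(skel);

        while (cvCountNonZero(img) > 0) {
            cvErode(img, eroded, NULL, 1);
            cvDilate(eroded, temp, NULL, 1);    /* "opening" of img                 */
            cvSub(img, temp, temp, NULL);       /* pixels the opening removed       */
            cvOr(skel, temp, skel, NULL);       /* accumulate them as the skeleton  */
            cvCopy(eroded, img, NULL);
        }

        cvReleaseImage(&img);
        cvReleaseImage(&eroded);
        cvReleaseImage(&temp);
    }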

Anyway, on the way back home, I witnessed a particular incident - a lady had boarded the train with her two young daughters - unfortunately, there were no free seats available at that time, but thankfully, a young man gave up his seat to her. Well, as all loving mothers would do, she let one of her daughters. The other, she had initially carried her other daughter, but I suppose she got a bit tired so she put her down in front of her older (i think) sister. The best part? The people sitting on the left and right of the mother and two daughters, did not budge. Instead, they just looked at them. Interesting huh?





Reflection of the Week:
Well well well. End of the 6th week, which means another 50 days to go (excluding the weekends of course).

With the nature of the company being a research-based one, there is usually no rush to meet deadlines and no binding to specific tools. It was mentioned a couple of times - deadlines are quite relaxed and there is room to think "what if" during the project, instead of just "how to".

I also had to keep reminding myself to set a deadline and a goal for what I want to achieve each day, so that I do not end up coding until 7pm-ish every day. With a proper goal in mind, I can work towards it with confidence and speed.

Last point regarding work: after a while, the objectives become less clear and milestones have to be set. Initially we were supposed to create a nicer light effect (with edge blending), then work on the game, which is in progress. But because the details of the "game" mode - gameplay and so on - are not really defined, we sometimes veer off course on what we should develop and research, moving all the way to cross-detection and whatnot. As such, I found it useful to have mini milestones, even if it is just an individual milestone to work on, so that I know what I am achieving or what needs to be achieved. Speaking of which, I had better check with JL, after I finish the shape detection, on what the proposed specs should be...


Another thing I noticed this week is about human nature (off topic from SIP). On Sunday, I attended a kayaking race (or canoeing race, if you prefer) with 150+ participants. With many people attempting the race for the first time (me included), they were unsure of the currents and choppiness of the water out at sea, especially near the coastal edges of Ubin. And well, quite a number of them capsized. Even as participants in the race, my friends and I stopped our craft and helped them back onto their kayaks, even though it was not our job to do so. All in the name of good sportsmanship and whatnot, right?

However, there were plenty of others before us who just went past them, without helping or staying with them till a rescue boat came - especially when they capsized in the middle of the channel, where other motorized boats frequented. Reflecting on such incidents brought me back to Friday's post on the mother with two daughters and the other passengers on the train to the left and right of them. I mean, considering their age, they might perhaps have been entitled to sit in those seats. But after sitting there for so long, shouldn't they realize that there is this mother of two, struggling to balance herself while keeping an eye on her two daughters? Even if they are that entitled, shouldn't they give their seat up - say, after resting from City Hall to Bedok - to the poor mother, who ultimately resorted to squatting (you can see her in the bottom right of the picture) so she could balance herself (lower CG) and take care of her two daughters?

What these two incidents have in common is that these other people, when seeing others who obviously needed help of which they had the capabilities and/or resources to do, merely sat there staring at the situation - somewhat being unable to react to it, or too self-centered to do so?

I don't really know. After all, I am in no position to judge or tell them on what they should or rather, shouldn't be doing, or how they should react. Is this a sign of being self-centered, or just conflicting morals and belief in the right of way?

Wednesday, November 21, 2007

Week 5 (19 Nov - 23 Nov)

Monday, 19 November 2007

Well, I returned to the office to play with the edge blending code completed last week and, to my surprise, there is a hidden memory leak in the program. Which is strange, because I had already done quite a few rounds of code walkthroughs, double-checking that I had freed all the pointers used in the course of the program. Well, except for two, which are used as function parameters. And when I released them once the function was done, the memory leak disappeared - which led me to think: does declaring pointers as function parameters also create new allocations in the system? Weren't they just a name or unique identifier within the method, and not actual new pointers? hmmm..
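
For my own future reference, a toy example (not the actual lightdraw code) of where such a leak usually comes from: the parameter itself is only a copy of the caller's pointer, so declaring it allocates nothing - it is whatever the function creates and forgets to release that leaks.

    #include <cv.h>

    void process_frame(IplImage* frame)          /* no allocation happens here        */
    {
        IplImage* work = cvCloneImage(frame);    /* THIS allocates a brand new image  */

        cvSmooth(work, work, CV_GAUSSIAN, 5, 5, 0, 0);
        /* ... use "work" ... */

        cvReleaseImage(&work);                   /* forget this line and the image
                                                    leaks on every single call        */
    }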

Anyway, with edge blending more or less done, it was time to move on to reading up on circle detection, along with some code I downloaded from the OpenCV Yahoo! Groups. Quite an interesting read; should be fun to implement.

We also went down to Beyond Social Services, a charity cum social home for unfortunate children facing family issues beyond their control. We collected cards asking them what they wanted for X'mas as well as their ambitions when they grow up. IHPC is having a charity event on 18 December for these children. Kevin has roped us in to do a game station. JL's and my station will definitely be an interesting one :)




Tuesday, 20 November 2007

Well, perhaps not as interesting as I had thought - still trying to understand the code from the Yahoo! Groups and reading up on the various methods used within it, such as CvBox2D and cvFindContours. The documentation is pretty abstract and, I believe, implicitly assumes that the developer already has a background in this area of image/object recognition.

But nevertheless, today was spent mostly on reading, understanding and trying things out.

To summarize today's post, the reason I am researching circle/shape detection is that we are hoping to convert lightdraw into a POC multi-player real-time game where interaction between the various players and the computer takes place in the form of light trails. Hopefully it works :)



Wednesday, 21 November 2007

Yea! Managed to get the code working. Or at least to a certain extent. I ported my mini-sandbox over to the actual project and it works, though a bit too well for its own good. Now even slightly curved lines are detected as circles, while some circles which end prematurely are not detected at all. I spent the better half of the day playing around with the variables and adding an additional cvDilate to smoothen out the light outline - but the cvFindContours method does not seem to cooperate and gives strange results.

Later in the day, I researched more about circle/shape detection, and quite a number of sites mentioned the Hough transform and how useful it is in helping a program use predefined points to decide whether the input shape is a circle/square/cross, etc.

Not too sure on how it should work. Perhaps tomorrow will be another reading up day. :)



Thursday, 22 November 2007

We spent the morning at the library as there was a teleconference going on in the Cove. We updated our journal and were tasked to help with IHPC's D&D gift wrapping. It was pretty interesting to help out in some of their various activities, and along the way, I learnt how to tie a ribbon. :)

Back to lunch, then to work. Had a nice lunch at the Business school canteen, generously sponsored by Kevin, our cool supervisor at IHPC. Worked a bit more on the circle detection and improved the code. Initially, the code based the circle on two points - the start and the end - along the edge of the contour, and decided whether their distance was close enough for the contour to be considered a fully closed shape, around which a circle is then drawn. This was under the assumption that the two points obtained would be the actual start and end of the stroke. However, the OpenCV library stores the contour such that the last point of the array is actually some point close to the start - making it hard to decide which contours are closed shapes and which aren't.

Thus I experimented here and there for a while and came up with a more accurate algorithm. Well, I won't say it's the best, and I'm open to feedback on better ones :). Here goes nothing - what the program now does is take two pairs of points on opposite sides of the contour and calculate the distance between each pair. A circle, give or take, will return two distances which are quite close (within 20 units of each other). A non-circle shape, such as a large oval or a broken circle, will result in distances that are rather far apart (more than 20 units).

And for some circles which are not properly captured by the camera, the two distances can also be compared with the diameter of the circle which the program wants to draw onto the screen. If either distance is close enough, the circle is drawn; otherwise it is not.
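
Roughly, the check looks like the sketch below. The contour is the CvSeq* returned by cvFindContours, the points are picked at quarter intervals along it, and 20 is the rough tolerance mentioned above - all of it illustrative rather than the exact code.

    #include <math.h>
    #include <cv.h>

    static double dist(CvPoint a, CvPoint b)
    {
        double dx = a.x - b.x, dy = a.y - b.y;
        return sqrt(dx * dx + dy * dy);
    }

    /* Sketch of the opposite-point-pair check. */
    int is_closed_circle(CvSeq* contour, double targetDiameter)
    {
        int n = contour->total;
        if (n < 4) return 0;

        /* two pairs of (roughly) diametrically opposite points along the contour */
        CvPoint a = *(CvPoint*)cvGetSeqElem(contour, 0);
        CvPoint b = *(CvPoint*)cvGetSeqElem(contour, n / 2);
        CvPoint c = *(CvPoint*)cvGetSeqElem(contour, n / 4);
        CvPoint d = *(CvPoint*)cvGetSeqElem(contour, 3 * n / 4);

        double d1 = dist(a, b);
        double d2 = dist(c, d);

        /* a closed circle gives two nearly equal "diameters"...                  */
        if (fabs(d1 - d2) > 20.0) return 0;

        /* ...and at least one of them should be close to the circle we render    */
        return fabs(d1 - targetDiameter) < 20.0 || fabs(d2 - targetDiameter) < 20.0;
    }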

I have not actually merged this into the C file yet, as I had taken over JL's combined code for edge blending and trailing to see if I can make it more OO and check for any errors such as pointers left behind. Well, I double-checked the program and released all the pointers. However, a memory leak still occurs. After mistaking "size + 1" for the size variable being incremented by 1, I decided to call it a day. We finally left the office at 6.45pm.



Friday, 23 November 2007

My life revolves around circles, or so it seems - circle detection for the whole week now. When I try to recall what I ate for lunch yesterday, or what the date was on Monday, I simply cannot. Every day seems to bring new things to do and new challenges to overcome, such that time just flies by. *flies*

Anyway, I've increased the number of point pairs to 3 (6 points in total), and the average of these three is compared to the computed circle for increased accuracy. Pretty okay, except that some circles which are not quite complete, but are nevertheless still circles, are not detected by the software.

Maybe I shall post up a few pics of the circle detection some other time. Anyway, today is dinner and dance for the company, hope everyone enjoys themselves with all the games, prizes and dinner.



Reflection for the Week:

Lightdraw seems to be making healthy progress for a month now, in my humble opinion. However, there are lots of other things to be done to improve the software. Trailing and edge blending have been integrated into a single file, and on Tuesday we swapped our compiler from gcc to g++ to make use of data structures such as queues, which C does not provide.
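
Just to illustrate what the switch buys us (hypothetical names, not the actual trailing code): keeping the last N frames of the trail in a std::queue is a one-liner per operation, instead of hand-rolled C arrays.

    #include <queue>
    #include <cv.h>

    std::queue<IplImage*> trail;                 // the last few frames of the light trail
    const size_t TRAIL_LENGTH = 15;              // how many past frames to keep (example)

    void push_frame(IplImage* frame)
    {
        trail.push(cvCloneImage(frame));         // keep our own copy of the frame
        if (trail.size() > TRAIL_LENGTH) {
            IplImage* oldest = trail.front();    // the frame that fades out of the trail
            trail.pop();
            cvReleaseImage(&oldest);
        }
    }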

With circle detection more or less there, the next step would be collision detection with rendered shapes on the screen. The POC game which involves multiple players playing simultaneously and interactively on an improved piece of lightdraw code will hopefully be something achievable within the next week.

Other than that, I'm beginning to get more or less used to the traveling to work each day, and I have even found an alternative route to work, one street away, in the event I miss the bus I usually take. However, I don't get ample sleep on either bus journey as they are rather short. Since the time spent traveling to work and back home each day is long, I've learnt to make better use of my time at work, trying to spend it productively so that no day goes to waste. Hopefully, we can finish the project faster too, with some cool new add-ons.

Last thing I've learnt this week from various incidents - email is a voiceless and toneless medium of communication via plain text. Words are taken at face value unless elaborated on, so I must remember to choose my words wisely, or elaborate more. :)

Monday, November 12, 2007

Week 4 (12 Nov - 16 Nov)

Monday, 12 November 2007
Heroes get remembered. Legends live forever.
Pondering, pondering and pondering. Together with Monday blues. I have put aside the first and only calibration of the system after a bit more tinkering, and am now working on a nicer light effect, akin to Mac OS X's screensaver. But cvSmooth and cvDilate do not seem to be doing the job - if anything, they make the output much blurrier than before and mix the colours, giving me purple for things like my green shirt and hand. I think OpenCV has reached its limit in this aspect. Perhaps some research has to be done on OpenGL?

And, how can a camera tell the difference between a white shirt and a white light? After thinking about it on my way home via an alternative route which let me sleep quite a bit, here are my thoughts:

A camera captures what it sees on a 2D plane. Light reflects off the objects and into the lens. A white shirt and a white light will ideally both give a value of 255 for all RGB channels. Which means that, as far as the camera is concerned, both objects are 255.

Well, okay - it is arguable that the light's outer ring can be considered less than 255, and that the shirt will have creases so its values will also be less than 255. But a shirt with a red and blue logo on it would still not be caught by that kind of check.

A workaround is to not render anything that is white, i.e. has 255 for all its RGB values. But when the user wears a white shirt, uses a white light, takes a portrait of himself with the whites of his eyes showing, etc. - all these white areas will not be drawn, which is a problem. The user would then be limited to a particular light/attire before being able to use the project. hmmz..
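
As a sketch, the naive workaround would look something like this - frame is the camera image, canvas is whatever we render onto, and the 250 cut-off is an arbitrary example. As noted above, it throws away white shirts and eyes along with the torchlight, so it is only a stop-gap.

    #include <cv.h>

    /* Build a mask of near-white pixels and leave them undrawn. Illustrative only. */
    void copy_without_whites(IplImage* frame, IplImage* canvas)
    {
        IplImage* whiteMask = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

        cvInRangeS(frame, cvScalar(250, 250, 250, 0),
                   cvScalar(255, 255, 255, 255), whiteMask);   /* near-white pixels  */
        cvNot(whiteMask, whiteMask);                           /* everything else    */
        cvCopy(frame, canvas, whiteMask);                      /* skip the whites    */

        cvReleaseImage(&whiteMask);
    }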


If you have any suggestions or inquiries, do leave a comment and we will take it from there.

Blogger reminder/tip:
Don't use angle brackets - they will be mistaken for tags and the whole blog post, or a portion of it, will disappear!


Tuesday, 13 November 2007
If the doors of perception were to be cleaned, man would see everything as it truly is.....Infinite.


The first half of the day was spent on just fixing errors from the code done yesterday as well as updating my SVN repository. During lunch break, JL came up with a possible idea of creating the light edge blending effect using the cvCanny method.

After lunch, spent the rest of the day working on the code for it. However, there seemed to be a problem with the 4 nested for-loops. Looks like I've gotta review my code tomorrow...



Wednesday, 14 November 2007

If life gave me lemons, I would begin to wonder if I was mislabeled in life's database as a lemon tree, instead of a human being.

Continued fixing the code from yesterday since I had left early. Had to manually walk through each step of the code to check for errors and mistakes. However, as the code was copy-pasted from many existing functions all over the place, it became quite unmanageable. Thus after lunch, I finally decided to rewrite the function from scratch.

Rewriting took some time: starting with pseudo-code, getting the nested for-loops to work, then putting in the logic chunk by chunk and compiling and running to see if it would give a bus error or segmentation fault.

It was only at 6pm sharp that I finally managed to get the code working the way I wanted. However, a disappointing problem occurred - every time the camera, or sequence grabber, grabs a frame and cvCanny is run on it, the outlines that are drawn vary. The lines are drawn very randomly, and the edge blending effect results in big square blocks which cannot be drawn over.

When testing, the general shape is there, but the output appears to be light with black squares all around. Perhaps I shall play with the method variables tomorrow, increasing the threshold, etc. And also explore more alternatives.

Sidetrack - we installed Fedora 8 on two desktops which were much faster than the laptops we were initially issued. More responsive hopefully means more productive. Special thanks to Kevin for lending us his desktop and an unused desktop belonging to an empty desk.



Thursday, 15 November 2007
We should be happy with what we have. But if what we have is something that can be improved, we should strive for improvements, after all, many innovations would not have come about if their creators were happy with what they had.


Back to playing with the code to see how it can be improved. Did a couple of code walkthroughs but did not find anything missing. Why were the black squares appearing? Why does the blending not seem to look any different from the rest? Why was the program lagging even under normal usage?

At the end of the day, after many of my brain cells had died and I almost didn't make it home, I realized this:
  1. The black squares were appearing because the number of pixels used to 'blend' the edges was too large. I used a factor of 10, which resulted in 20 x 20 pixel blending blocks.
  2. The program was lagging due to the 4 nested for-loops, which used up a lot of the computer's resources.
With some problems solved, I left for home at 7pm. There were still some questions left...
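
A possible alternative to the per-pixel nested loops (sketched below with made-up names and blend weights) is to let cvSmooth do the neighbourhood work on the Canny edge map, then mix the softened "halo" into the output with cvAddWeighted.

    #include <cv.h>

    /* Rough sketch: blur the edge map into a halo and blend it over the output. */
    void blend_edges(IplImage* gray, IplImage* output)    /* output: 3-channel canvas */
    {
        IplImage* edges = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);
        IplImage* halo  = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);
        IplImage* halo3 = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 3);

        cvCanny(gray, edges, 50, 150, 3);
        cvSmooth(edges, halo, CV_GAUSSIAN, 21, 21, 0, 0); /* soft glow around each edge */
        cvCvtColor(halo, halo3, CV_GRAY2BGR);
        cvAddWeighted(output, 0.8, halo3, 0.2, 0, output);

        cvReleaseImage(&edges);
        cvReleaseImage(&halo);
        cvReleaseImage(&halo3);
    }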



Friday, 16 November 2007
Walk where there is no path and leave a trail for others.


We had a demonstration for the ED in the morning, and he was quite pleased with the work, in my opinion.

After lunch, it was back to finding out why the edge blending did not seem to work and why, when it did, it duplicated itself thrice across the output screen.

Code walkthroughs are becoming part of the daily routine by now. Then all of a sudden, towards the end of the day, at around 5.50pm, I finally solved the problem.

The blending did not seem to work (and indeed didn't) because I was passing the wrong image pointer into the blending function together with the original image. So while I spent the better half of the day changing the parameters and blend factors, nothing seemed to change - because I was using the wrong pointer. diao'

Secondly, the edge blending repeated itself three times across the screen because I had mistakenly used the wrong colour channel for the input and output of cvCanny and the fade function. With these two problems fixed, the effects work fine, though they are not very obvious. Hopefully, when the shutter speed works, the effort spent throughout the entire week will pay off.



Reflection for the Week:
With 4 out of 5 days spent getting the edge blending to work, it did feel very frustrating, especially during midweek, when my code simply refused to work. Using all the debugging skills I had picked up from school, I finally managed to solve the mystery on the last day of the week. Perhaps from this mini episode, it dawned upon me that when faced with difficulties and obstacles that seem to have no end, instead of calling it quits, I should keep overcoming them one by one, and they will soon be over.

I have also learnt on the job about setting objectives and goals and managing my own expectations. On some days, I had to limit myself to reaching a target (such as getting the code to make successfully) and calling it a day. If I did not do so, I would probably have spent much more time at work and headed home much later. Well, not that I am trying to be damn lazy or showing off what a workaholic I am, but with a clear goal in mind and a full night of rest, productivity the following day is higher and I am able to progress further.


Well, come to think of it, this concludes my first month at A*Star IHPC. In just 4 weeks, I have learnt many new lessons from various experiences, many of which cannot be taught within a classroom setting. Initially, I was worried about the long traveling distances and the unfamiliar language and development environments I would be using. But as long as we are passionate and interested in what we do, coupled with the willingness to learn, it isn't too hard after all. :)

Looking forward to the next week after a weekend of (insufficient) rest. :)

Cheers

Thursday, November 8, 2007

Week 3 (5 Nov - 9 Nov)

Monday, 5 November 2007
The day started with us talking about our (JL's and mine) integrated code for the application, which is the basis of what we will be working on in the future.

It seemed pretty strange to talk to people more experienced and senior than me as colleagues - but I guess that will happen pretty often soon, huh? Anyway, the code was so hacked together that it took quite a while to re-understand the logic behind it, and I explained several things wrongly in the first place. When I realized that the code's workings were wrong, I had to rework what I had explained. Oh well. After reviewing the product of our work, Kevin and CT identified some areas for improvement and assigned the inverting and resizing of the image to me and JL.

Next, to facilitate things for future developers and the rest of our team, we had to draw up a document describing the workings of the code (call it documentation if you want, but it's a pretty informal document). And this is the first time I have used OpenOffice 2.2 for official purposes. :)

While JL did the initial documentation, I reviewed the code logic, changed variable names and put in more meaningful comments within the code so that people would be able to understand it with ease. Well, looks like those points from the assignment specs have been integrated into my mind.

Kevin and CT were still setting up the SVN thingy, so we proceeded with our tasks. After rewriting the code, I volunteered to help with the second part of the documentation while JL worked on the code. By the time I had finished the documentation, he had already finished both of the improvement tasks. Teamwork, yea!

Soon, CT and Kevin had successfully set up the SVN. The SVN rides on the system's user authentication and now uses the Mac's SSH as the primary means of authentication. Kevin and CT walked us through how to use SVN's basic functions, such as checking out, updating, committing, etc.

We ended the day with the two new improvements complete and the documentation finished and uploaded to the SVN.




Tuesday, 6 November 2007
Since we had finished the two areas of improvement, we worked on the next one, which is trailing. As the light is drawn, after X amount of time, the images are supposed to fade away, leaving the canvas empty.

Kevin also invited his friend, an ex-IHPC employee, to talk to us more about Carbon and Cocoa, and how using Quartz Composer can help improve our code quality and development time. I learnt quite a lot in that short talk about the back-end libraries of Mac OS X and the porting over from Carbon to Cocoa. Personally, I feel that Quartz Composer is akin to Expression Blend, but for the Mac - leave me a comment if I'm wrong.

After which, at around 4pm, Dr Eng dropped by to visit us. Really great to see a familiar TP face around. We showed him the project progress as well as the 2x3 tile display.




Wednesday, 7 November 2007
Still working on Tuesday's task.. Very tricky..




Thursday, 8 November 2007
There is no work today as it's a public holiday. A day of rest and to catch up on my other things.

"Give the man a break and his productivity will be doubled. Work him through and he will have none"



Friday, 9 November 2007

I arrived at work to find the Makefile for the project updated - as usual, I had forgotten to run SVN update on my project folder. I've gotta remember that next time! Anyway, CT managed to create a dynamic Makefile, which can build the various source files and put their binary output files into the /bin folder automatically. This is great because now we can have various versions of the code with different names and still use the same Makefile to compile them.

The last day of the week. Again it was very interesting, with two major improvements done: the trailing effect by JL (it's really cool) and the first calibration by me (well, not so cool). The first calibration seems to be malfunctioning. When I double-checked the code, it seemed to work well, but on the Mac with the iSight, it didn't really make a difference.

The question was - how do you tell the difference between a white shirt and a white (ceiling/torch) light? If both of them result in RGB/Lab values of 255, what distinguishes them?

JL said to ask the camera to touch the white object. Interesting thought. Looks like there can be more research about it coming up. Time to combine code on Monday... not?

*lost in thought*



Reflection for the Week:
Week 3 passed equally fast. Pretty soon, I'll have been working at IHPC for a month. The Lightdraw project seems to be going well, and the various people we meet and the additional things we do outside of the project are fun and entertaining - such as going to talks, exploring NUS, finding new eating places, etc.

I have also more or less adopted better time management to juggle work and my other commitments, and have learnt how to use the time spent traveling more effectively, such as by sleeping or reading.

Fedora 8 has been released. JL has burnt a disc and is installing it on his laptop. I am tempted to do so too, but my current laptop has insufficient memory at the moment, with all the stuff that I do. It's hard to focus and specialize in more than one area of technology - confusing/conflicting markers are not the main problem; it's more that the time spent developing in each area is not equal. Perhaps the first step is to set up VNC at home and be able to VNC to the computer in the office. (haha)

Last thing I have learnt - The walls have ears, and even if there are no walls, the ears still exist.