Monday, 12 November 2007
Heroes get remembered. Legends live forever.
Pondering, pondering and pondering, together with the Monday blues. Have put aside the first and only calibration of the system after a bit more tinkering, and am now working on a nicer light effect, akin to Mac OS X's screensaver. But cvSmooth and cvDilate do not seem to be doing the job - if anything, they make the output much blurrier than before, mixing the colours and giving me purple for things like my green shirt and hand. I think OpenCV has reached its limit in this aspect. Perhaps research has to be done on OpenGL?
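The colour-mixing can be sketched in miniature: a box blur averages each pixel with its neighbours, so pixels on the border between two differently-coloured regions take on in-between, murky values. This is a plain Python toy with made-up pixel values, standing in for what cvSmooth's box filter does in 2-D; it is an illustration, not the actual OpenCV code.

```python
# Minimal sketch of why smoothing mixes colours at object boundaries.
# A 1-D row of RGB pixels: two green pixels next to a magenta one.
# (Hypothetical values; a 3x3 cvSmooth box blur behaves similarly in 2-D.)

def box_blur_row(row, radius=1):
    """Average each pixel with its horizontal neighbours (clamped at edges)."""
    blurred = []
    for i in range(len(row)):
        lo = max(0, i - radius)
        hi = min(len(row), i + radius + 1)
        window = row[lo:hi]
        # Average each channel (R, G, B) over the window.
        blurred.append(tuple(sum(ch) // len(window) for ch in zip(*window)))
    return blurred

row = [(0, 200, 0), (0, 200, 0), (200, 0, 200)]  # green, green, magenta
print(box_blur_row(row))
# → [(0, 200, 0), (66, 133, 66), (100, 100, 100)]
```

The boundary pixels come out as washed-out mixtures of both regions, which is roughly the purple-on-green-shirt effect described above.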
And how can a camera tell the difference between a white shirt and a white light? After thinking about it on my way home (via an alternative route that let me sleep quite a bit), here are my thoughts:
A camera captures what it sees on a 2D plane. Light reflects off objects and into the lens. A white shirt and a white light will both, ideally, give a value of 255 for all three RGB channels. Which means that, as far as the camera is concerned, both objects are 255.
Well, okay - it is arguable that the light's outer ring can be considered less than 255, and that the shirt will have creases whose values will, too, be less than 255. But a shirt with a red and blue logo on it will not be detected by that pattern.
A workaround is to not render anything that is white, i.e. has 255 for all of its RGB values. But when the user wears a white shirt, uses a white light, takes a portrait of himself with the whites of his eyes showing, etc. - all these white areas will not be drawn, which is a problem. The user will then be limited to using a particular light or attire before being able to use the project. hmmz..
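The workaround, and why it over-masks, can be shown in a few lines. This is a hypothetical sketch, not the project's actual code: the threshold value and the pixel tuples are made up for illustration.

```python
# Sketch of the "don't render white" workaround: skip any pixel whose RGB
# values are all near 255, treating it as light rather than object.

WHITE_THRESHOLD = 250  # hypothetical cut-off, tuned by eye

def is_white(pixel, threshold=WHITE_THRESHOLD):
    r, g, b = pixel
    return r >= threshold and g >= threshold and b >= threshold

def mask_whites(pixels):
    """Replace near-white pixels with None so they are not drawn."""
    return [None if is_white(p) else p for p in pixels]

frame = [(255, 255, 255),  # a white light
         (250, 252, 255),  # a white shirt under bright light
         (30, 180, 40)]    # a green shirt
print(mask_whites(frame))  # → [None, None, (30, 180, 40)]
```

Note how the first two pixels are indistinguishable to the mask: the shirt is dropped along with the light, which is exactly the limitation described above.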
If you have any suggestions or inquiries, do leave a comment and we will take it from there.
Blogger reminder/tip:
Don't use angular brackets - they will be mistaken for HTML tags and the whole blog post, or a portion of it, will disappear!
Tuesday, 13 November 2007
If the doors of perception were to be cleaned, man would see everything as it truly is.....Infinite.
The first half of the day was spent on just fixing errors from the code done yesterday as well as updating my SVN repository. During lunch break, JL came up with a possible idea of creating the light edge blending effect using the cvCanny method.
After lunch, spent the rest of the day working on the code for it. However, there seemed to be a problem with the 4 nested for-loops. Looks like I've gotta review my code tomorrow...
Wednesday, 14 November 2007
If life gave me lemons, I would begin to wonder if I was mislabeled in life's database as a lemon tree, instead of a human being.
Continued fixing the code from yesterday since I had left early. Had to manually walk through each step of the code to check for errors and mistakes. However, as the code was copy-pasted from many existing functions all over the place, it became quite unmanageable. Thus after lunch, I finally decided to rewrite the function from scratch.
Rewriting took some time: starting with pseudo-code, getting the nested for-loops to work, then putting in the logic chunk by chunk, compiling and running to see if it would give a bus error or segmentation fault.
It was only at 6pm sharp that I finally managed to get the code working the way I wanted. However, a disappointing problem occurred - every time the camera (or sequence grabber) grabs a frame and cvCanny is run on it, the outlines that are drawn vary. The lines are drawn very randomly, and the edge blending effect results in big square blocks which cannot be over-drawn.
When testing, the general shape is there, but the output appears too light, with black squares all around. Perhaps I shall play with the function's parameters tomorrow, increasing the thresholds, etc. And also explore more alternatives.
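Why raising the thresholds might steady the output: Canny keeps pixels whose intensity gradient exceeds a threshold, so tiny frame-to-frame sensor noise flips weak edges on and off between grabs, while strong edges survive in every frame. A 1-D toy version of that idea, with made-up intensities rather than real cvCanny output:

```python
# Rough illustration of edge-detection thresholds vs. frame noise.
# "edges" marks positions where the neighbour-to-neighbour intensity
# difference exceeds a threshold (a crude stand-in for a gradient test).

def edges(row, threshold):
    """Indices where the absolute neighbour difference exceeds threshold."""
    return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) > threshold]

frame_a = [10, 12, 11, 200, 201]  # faint noise, then one strong edge
frame_b = [10, 15, 11, 200, 199]  # same scene, slightly different noise

print(edges(frame_a, 2), edges(frame_b, 2))    # → [3] [1, 2, 3]
print(edges(frame_a, 50), edges(frame_b, 50))  # → [3] [3]
```

With a low threshold the detected edges differ between the two "frames"; with a higher one, only the stable, strong edge remains - which is the behaviour to hope for when tuning cvCanny's thresholds.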
Sidetrack - we installed Fedora 8 on two desktops which were much faster than the laptops we were initially issued. More responsive hopefully means more productive. Special thanks to Kevin for lending us his desktop and an unused desktop belonging to an empty desk.
Thursday, 15 November 2007
We should be happy with what we have. But if what we have is something that can be improved, we should strive for improvements - after all, many innovations would not have come about if their creators had been happy with what they had.
Back to playing with the code to see how it can be improved. Did a couple of code walkthroughs but did not find anything missing. Why were the black squares appearing? Why does the blending not seem any different from the rest? Why was the program lagging even under normal usage?
At the end of the day, after many of my brain cells had died and I almost didn't make it home, I realized this:
- The black squares were appearing because the number of pixels used to 'blend' the edges was too large. I used a factor of 10, which resulted in 20 x 20 pixel blend blocks.
- The program was lagging because the 4 nested for-loops used up a lot of the computer's resources.
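The two findings above can be checked back-of-envelope. The frame size and the fraction of the frame that is edges are assumptions for illustration; only the factor of 10 and the 20 x 20 block come from the notes above.

```python
# A blend "factor" f expands each edge pixel into a (2f x 2f) block,
# and the four nested loops touch block_area operations per edge pixel
# per frame - which adds up fast.

def blend_block_area(factor):
    side = 2 * factor          # factor of 10 -> a 20 x 20 block
    return side * side

def ops_per_frame(width, height, factor, edge_fraction=0.05):
    """Rough per-frame operation count: edge pixels times the block each
    one re-draws. edge_fraction is a made-up guess, not a measurement."""
    edge_pixels = int(width * height * edge_fraction)
    return edge_pixels * blend_block_area(factor)

print(blend_block_area(10))         # → 400 pixels per block
print(ops_per_frame(640, 480, 10))  # → 6144000, millions of ops per frame
```

Even with only 5% of a 640 x 480 frame on an edge, that is millions of pixel writes per frame, enough to explain both the lag and the large over-painted squares.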
Friday, 16 November 2007
Walk where there is no path and leave a trail for others.
We had a demonstration for the ED in the morning, and he was quite pleased with the work, in my opinion.
After lunch, it was back to finding out why the edge blending did not seem to work, and why, when it did, it duplicated itself thrice across the output screen.
Code walkthroughs are becoming part of the daily routine about now. Then all of a sudden, towards the end of the day, at around 5.50pm, I finally solved the problem.
The blending did not seem to work (and didn't actually work) because I was passing the wrong image pointer into the blending function along with the original image. Thus, while I spent the better half of the day changing the parameters and blend factors, nothing seemed to work - because I was using the wrong pointer all along. diao'
Secondly, the edge blending repeated itself three times across the screen because I had mistakenly used the wrong colour channel for the input and output of cvCanny and the fade function. With these two problems fixed, the effects work fine, though they are not very obvious. Hopefully, when the shutter speed works, the effort spent throughout the entire week will repay itself.
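One guess at the mechanics behind the "repeated three times" symptom: if a single-channel (grayscale) buffer is walked with a 3-channel stride, the read position wraps around the row three times, tiling a subsampled copy of the image across the output. This tiny sketch is a hypothetical reconstruction with made-up values, not the project's actual buffer layout.

```python
# A 6-pixel grayscale row (e.g. hypothetical cvCanny output).
gray_row = [10, 20, 30, 40, 50, 60]

# Correct: one value per output pixel.
correct = gray_row

# Wrong: stepping through the buffer 3 values at a time (as if each pixel
# were an RGB triple) wraps around the row, tiling its content three times.
wrong_stride = [gray_row[(3 * i) % len(gray_row)] for i in range(6)]

print(correct)       # → [10, 20, 30, 40, 50, 60]
print(wrong_stride)  # → [10, 40, 10, 40, 10, 40]  (pattern repeats 3x)
```

The wrong-stride output is the same short pattern repeated three times across the row, which resembles an image duplicating itself thrice across the screen.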
Reflection for the Week:
With 4 out of 5 days spent on getting the edge blending to work, it did feel very frustrating, especially during midweek, when my code simply failed to work. Using all the debugging skills I had picked up from school, I finally managed to solve the mystery on the last day of the week. Perhaps from this mini episode, it dawned upon me that when faced with difficulties and obstacles to which there seems to be no end, instead of calling it quits, continue to overcome the obstacles one by one and they will soon be over.
I have also learnt on the job about setting objectives and goals and handling my own expectations. On some days, I had to limit myself to reaching a target (such as getting the code to build successfully) and calling it a day. If I did not do so, I would probably have spent much more time at work and headed home much later. Well, not that I am trying to be lazy or show off what a workaholic I am, but with a clear goal in mind and a full night of rest, productivity the following day will be increased and I will be able to progress further.
Well, come to think of it, this concludes my first month at A*Star IHPC. In just 4 weeks, I have learnt many new lessons from various experiences, many of which cannot be taught within a classroom setting. Initially, I was worried about the long travelling distances and the unfamiliar language and development environments I would be using. But as long as we are passionate and interested in what we do, coupled with the willingness to learn, it isn't too hard after all. :)
Looking forward to the next week after a weekend of (insufficient) rest. :)
Cheers