Thinking of you makes me smile inside.

Or if I happen to be outside, I smile there too.

The Viennese Ball last Friday was awesome! Joanna and I triple-dated with her mom, Roger, Erc, and Kejia. Erc found us “the best restaurant in Burlingame”, where we enjoyed a very, very fancy dinner. Roger was astounded by their tome-sized wine list, which had more than 6000 options. Ridiculous! Having joked with Erc earlier about duck being the thing you get at fancy restaurants, we had to have duck. We also ordered dessert – I got the mocha torte, which was presented in the Art Nouveau style, as you can see in the picture below.

The Viennese Ball is a Stanford tradition, a large event held at the Hyatt Regency hotel in Burlingame. There are two large rooms for dancing (waltz and swing), both with live bands. Waltzing is my core competency, as I have not taken a swing class in almost a year. In the lobby, there are all sorts of desserts and punches for sending ballgoers into insulin shock.


Erc and Kejia in their finery


Jo’s Mommypuff and Roger


Woooooo… dizzy!


I’m told that I clean up well

Enjoy yourself (It's later than you think)

((our CS143 project, that is))

The weekend before last, Erc, Kej, Doougle, and I spontaneously decided to go hiking in the foothills. Erc took us to this great place up on Skyline. Best use of a lateday ever!


Lovely California


Doougle in deep contemplation of natural beauty


Can you find the Asians in this picture?


Hoover Tower looks different up here


Beginnings of sunset


Best lateday ever!

Singular Value Decomposition of the Soul

I’m watching all the CS223B lectures again before the midterm tomorrow. While doing so, I noticed that in one of them Sebastian Thrun conclusively resolves the age-old question of which came first: the chicken or the egg.

The egg came first because there were reptiles before there were chickens.

Today is the day

Current bid: US $16.00

Time left: 9 hours 51 mins
10-day listing, Ends Feb-17-06 22:55:49 PST
Start time: Feb-07-06 22:55:49 PST
History: 17 bids (US $0.01 starting bid)
High bidder: lrsgmc47 ( 8 )

Evil Killer Robot Auction Views: 2604

The CS247 Experience

My upcoming book will teach you:

The 7 principles for success in design
The 5 guiding forces of human computer interaction
The 10 evaluation heuristics relevant in HCI
The 3 things you need to know about interfaces
The 2 informance mechanisms of dynamic prototyping
The 8 top causes of design breakdown
The 13 best ways of presenting errors
The 9 precepts of human cognition
The 34 human senses involved in the intuitive actovaluetron
The 56 tradeoffs of rapid consistency engineering
The 100 reasons why HCI has nothing to do with computer science

Embrace the Void*

Mama they try and break me

The window burns to light the way back home
A light that warms no matter where they’ve gone

They’re off to find the hero of the day
But what if they should fall by someone’s wicked way

Still the window burns
Time so slowly turns
And someone there is sighing
Keepers of the flames
Do you feel your name?
Did you hear your babies crying?

Mama they try and break me
Still they try and break me

‘Scuse me while I tend to how I feel
These things return to me that still seem real

Now deservingly this easy chair
But the rocking stopped by wheels of despair

Don’t want your aid
But the fist I made
For years, can’t hold or feel
No not on me
So please excuse me
While I tend to how I feel

But now the dreams and waking screams
That everlast the night
So build a wall
Behind it crawl
And hide until it’s light
So can’t you hear your babies crying now?

Still the window burns
Time so slowly turns
And someone there is sighing
Keepers of the flames
Can’t you hear your name?
Did you hear your babies crying?

But now the dreams and waking screams
That everlast the night
So build a wall
Behind it crawl
And hide until it’s light
So can’t you hear your babies crying now?

Mama they try and break me
Mama they try and break me
Mama they try and break me
Mama they try
Mama they try
Mama they try and break me
Mama they try and break me
Mama they try and break me
Mama they try
Mama they try



#include <cv.h>
#include <highgui.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The snippet uses pi and square() without defining them; these definitions
* (borrowed from the Stavens optical flow tutorial this code follows) fill the gap. */
static const double pi = 3.14159265358979323846;
static double square(int a) { return (double)a * a; }

int opticalFlow(IplImage *frame, IplImage *next_frame)
{
static const int NUM_FEATURES = 20000;

cvNamedWindow("Optical Flow", 1);
IplImage *frame1 = NULL, *frame1_1C = NULL, *frame2_1C = NULL, *eig_image = NULL, *temp_image = NULL, *pyramid1 = NULL, *pyramid2 = NULL;

IplImage *image_Gray = cvCreateImage(cvSize(frame->width,frame->height), IPL_DEPTH_8U, 1);

cvCvtColor(frame, image_Gray, CV_BGR2GRAY);
eig_image = cvCreateImage(cvSize(frame->width,frame->height), IPL_DEPTH_32F, 1);
// frame1 = cvCreateImage(cvSize(frame->width,frame->height), IPL_DEPTH_8U, 1);
frame1 = cvCloneImage(frame);
//cvCopy(frame,frame1);
temp_image = cvCreateImage(cvSize(frame->width,frame->height), IPL_DEPTH_32F, 1);
/* Go to the frame we want. Important if multiple frames are queried in
* the loop which they of course are for optical flow. Note that the very
* first call to this is actually not needed. (Because the correct position
* is set outside the for() loop.)
*/
// cvSetCaptureProperty( input_video, CV_CAP_PROP_POS_FRAMES, current_frame );

/* Get the next frame of the video.
* IMPORTANT! cvQueryFrame() always returns a pointer to the _same_
* memory location. So successive calls:
* frame1 = cvQueryFrame();
* frame2 = cvQueryFrame();
* frame3 = cvQueryFrame();
* will result in (frame1 == frame2 && frame2 == frame3) being true.
* The solution is to make a copy of the cvQueryFrame() output.
*/
//frame = cvQueryFrame( input_video );
if (frame == NULL)
{
/* Why did we get a NULL frame? We shouldn’t be at the end. */
fprintf(stderr, "Error: Hmm. The end came sooner than we thought.\n");
return -1;
}
/* Allocate another image if not already allocated.
* Image has ONE channel of color (ie: monochrome) with 8-bit “color” depth.
* This is the image format OpenCV algorithms actually operate on (mostly).
*/
// allocateOnDemand( &frame1_1C, frame_size, IPL_DEPTH_8U, 1 );
/* Convert whatever the AVI image format is into OpenCV’s preferred format.
* AND flip the image vertically. Flip is a shameless hack. OpenCV reads
* in AVIs upside-down by default. (No comment :-))
*/
// cvConvertImage(frame, frame1_1C, CV_CVTIMG_FLIP);

/* We’ll make a full color backup of this frame so that we can draw on it.
* (It’s not the best idea to draw on the static memory space of cvQueryFrame().)
*/
//allocateOnDemand( &frame1, frame_size, IPL_DEPTH_8U, 3 );
// cvConvertImage(frame, frame1, CV_CVTIMG_FLIP);

/* Get the second frame of video. Same principles as the first. */
//frame = cvQueryFrame( input_video );
/* if (frame == NULL)
{
fprintf(stderr, “Error: Hmm. The end came sooner than we thought.\n”);
return -1;
}*/
// allocateOnDemand( &frame2_1C, frame_size, IPL_DEPTH_8U, 1 );
// cvConvertImage(frame, frame2_1C, CV_CVTIMG_FLIP);

/* Shi and Tomasi Feature Tracking! */

/* Preparation: Allocate the necessary storage. */
// allocateOnDemand( &eig_image, frame_size, IPL_DEPTH_32F, 1 );
// allocateOnDemand( &temp_image, frame_size, IPL_DEPTH_32F, 1 );

/* Preparation: This array will contain the features found in frame 1. */
CvPoint2D32f frame1_features[NUM_FEATURES];

/* Preparation: BEFORE the function call this variable is the array size
* (or the maximum number of features to find). AFTER the function call
* this variable is the number of features actually found.
*/
int number_of_features;

/* We set this to NUM_FEATURES (20000 here). Keeping it a named constant makes
* it easy to change the number of features for an accuracy/speed tradeoff analysis.
*/
number_of_features = NUM_FEATURES;

/* Actually run the Shi and Tomasi algorithm!!
* "image_Gray" is the input image.
* "eig_image" and "temp_image" are just workspace for the algorithm.
* The first ".01" specifies the minimum quality of the features (based on the eigenvalues).
* The second ".01" specifies the minimum Euclidean distance between features.
* "NULL" means use the entire input image. You could point to a part of the image.
* WHEN THE ALGORITHM RETURNS:
* "frame1_features" will contain the feature points.
* "number_of_features" will be set to the number of features actually found. */
cvGoodFeaturesToTrack(image_Gray, eig_image, temp_image, frame1_features, &number_of_features, .01, .01, NULL, 5);

/* Pyramidal Lucas Kanade Optical Flow! */

/* This array will contain the locations of the points from frame 1 in frame 2. */
CvPoint2D32f frame2_features[NUM_FEATURES];

/* The i-th element of this array will be non-zero if and only if the i-th feature of
* frame 1 was found in frame 2.
*/
char optical_flow_found_feature[NUM_FEATURES];

/* The i-th element of this array is the error in the optical flow for the i-th feature
* of frame1 as found in frame 2. If the i-th feature was not found (see the array above)
* I think the i-th entry in this array is undefined.
*/
float optical_flow_feature_error[NUM_FEATURES];

/* This is the window size to use to avoid the aperture problem (see slide “Optical Flow: Overview”). */
CvSize optical_flow_window = cvSize(3,3);

/* This termination criteria tells the algorithm to stop when it has either done 20 iterations or when
* epsilon is better than .3. You can play with these parameters for speed vs. accuracy but these values
* work pretty well in many situations.
*/
CvTermCriteria optical_flow_termination_criteria
= cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, .3 );

/* This is some workspace for the algorithm.
* (The algorithm actually carves the image into pyramids of different resolutions.)
*/
pyramid1 = cvCreateImage(cvSize(frame->width,frame->height), IPL_DEPTH_8U, 1);
pyramid2 = cvCreateImage(cvSize(frame->width,frame->height), IPL_DEPTH_8U, 1);
// allocateOnDemand( &pyramid1, frame_size, IPL_DEPTH_8U, 1 );
// allocateOnDemand( &pyramid2, frame_size, IPL_DEPTH_8U, 1 );

/* Actually run Pyramidal Lucas Kanade Optical Flow!!
* "image_Gray" is the first frame (grayscale) with the known features.
* "next_frame" is the second frame where we want to find the first frame's features.
* "pyramid1" and "pyramid2" are workspace for the algorithm.
* "frame1_features" are the features from the first frame.
* "frame2_features" is the (outputted) locations of those features in the second frame.
* "number_of_features" is the number of features in the frame1_features array.
* "optical_flow_window" is the size of the window to use to avoid the aperture problem.
* "5" is the maximum number of pyramid levels to use. 0 would be just one level.
* "optical_flow_found_feature" is as described above (non-zero iff feature found by the flow).
* "optical_flow_feature_error" is as described above (error in the flow for this feature).
* "optical_flow_termination_criteria" is as described above (how long the algorithm should look).
* "0" means disable enhancements. (For example, the second array isn't pre-initialized with guesses.)
*/
cvCalcOpticalFlowPyrLK(image_Gray, next_frame, pyramid1, pyramid2, frame1_features, frame2_features, number_of_features, optical_flow_window, 5, optical_flow_found_feature, optical_flow_feature_error, optical_flow_termination_criteria, 0 );

// Cluster points array
//CvMat* points = cvCreateMat( number_of_features, 1, CV_32FC2 );
//CvMat* clusters = cvCreateMat( number_of_features, 1, CV_32SC1 );

/* green_pts[i] will hold the frame-1 location of feature i if it survives the
* vanishing-point filter below. memset with -1 sets every byte to 0xFF, so both
* int coordinates start at -1; x == -1 marks "not a passing-car feature". */
CvPoint* green_pts = (CvPoint*)malloc( number_of_features * sizeof(CvPoint));
memset(green_pts, -1, sizeof(CvPoint) * number_of_features);

CvMat* left_hull = cvCreateMat( number_of_features, 1, CV_32FC2 );
CvMat* right_hull = cvCreateMat( number_of_features, 1, CV_32FC2 );

int left_hull_count = 0;
int right_hull_count = 0;

//CvPoint* cluster_pts = (CvPoint*)malloc( number_of_features * sizeof(cluster_pts[0]));

int* lhull = (int*)malloc( number_of_features * sizeof(int));
int* rhull = (int*)malloc( number_of_features * sizeof(int));
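/* cvConvexHull2 (called below) fills these CV_32SC1 mats with the hull's
* vertex indices into left_hull/right_hull, and sets each mat's cols field
* to the number of hull points it found. */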
CvMat left_mat = cvMat( 1, number_of_features, CV_32SC1, lhull );
CvMat right_mat = cvMat( 1, number_of_features, CV_32SC1, rhull );

/* For fun (and debugging :)), let's draw the flow field. */
for(int i = 0; i < number_of_features; i++)
{
/* If Pyramidal Lucas Kanade didn’t really find the feature, skip it. */
if ( optical_flow_found_feature[i] == 0 ) continue;

int line_thickness; line_thickness = 1;
/* CV_RGB(red, green, blue) is the red, green, and blue components
* of the color you want, each out of 255.
*/
CvScalar line_color; line_color = CV_RGB(255,0,0);

/* Let’s make the flow field look nice with arrows. */

/* The arrows will be a bit too short for a nice visualization because of the high framerate
* (ie: there’s not much motion between the frames). So let’s lengthen them by a factor of 3.
*/
CvPoint p,q;
p.x = (int) frame1_features[i].x;
p.y = (int) frame1_features[i].y;
q.x = (int) frame2_features[i].x;
q.y = (int) frame2_features[i].y;

double angle; angle = atan2( (double) p.y - q.y, (double) p.x - q.x );
double hypotenuse; hypotenuse = sqrt( square(p.y - q.y) + square(p.x - q.x) );

//if (p.y

CvPoint v; // v is the vanishing point (empirically measured to be at (360, 130) – this is a dirty hack. Can we use Hough to find it?
v.x = 360;
v.y = 130;

double vangle;
vangle = atan2((double) p.y - v.y, (double) p.x - v.x);

// Filter out all displacement vectors not within an angle threshold of pi/6 from p to vanishing point
// Tried experimenting with the threshold, pi/6 seems to do an ok job of weeding out stopped cars in
// the distance in clip 2.

// Ultimately, we will take our haar-boxes and count the average amplitude of the vectors that
// pass this filter to estimate the car’s velocity.

if (fabs(vangle - angle) < pi / 6) /* pi/6 threshold per the comment above */
{
line_color = CV_RGB(0,255,0);

if(hypotenuse > 5)
{
// load features vectors into points list

green_pts[i].x = frame1_features[i].x;
green_pts[i].y = frame1_features[i].y;
}
else
{
green_pts[i].x = -1;
}

}
else
{
green_pts[i].x = -1;
}

if (fabs((vangle + pi) - angle) < pi / 6) /* assumed: same pi/6 threshold as above */
{
// this object is coming towards us
// based on some formula involving the distance to the vanishing point and the
// vector magnitude, we will either classify the object as moving or not

// high magnitude = moving towards us

// medium magnitude = stationary

}

// do not allow cars above vanishing point
if (p.y < v.y) continue; /* reconstructed: skip features above the vanishing point */

/* Here we lengthen the arrow by a factor of three. */
q.x = (int) (p.x - 3 * hypotenuse * cos(angle));
q.y = (int) (p.y - 3 * hypotenuse * sin(angle));

/* Now we draw the main line of the arrow. */
/* "frame1" is the frame to draw on.
* "p" is the point where the line begins.
* "q" is the point where the line stops.
* "CV_AA" means antialiased drawing.
* "0" means no fractional bits in the center coordinate or radius.
*/
cvLine( frame1, p, q, line_color, line_thickness, CV_AA, 0 );
/* Now draw the tips of the arrow. I do some scaling so that the
* tips look proportional to the main line of the arrow.
*/
p.x = (int) (q.x + 9 * cos(angle + pi / 4));
p.y = (int) (q.y + 9 * sin(angle + pi / 4));
// cvLine( frame1, p, q, line_color, line_thickness, CV_AA, 0 );
p.x = (int) (q.x + 9 * cos(angle - pi / 4));
p.y = (int) (q.y + 9 * sin(angle - pi / 4));
// cvLine( frame1, p, q, line_color, line_thickness, CV_AA, 0 );
}

right_hull_count = 0;
left_hull_count = 0;

for(int i = 0; i < number_of_features; i++)
{
// draw clusters for sanity

if (green_pts[i].x < 0) continue; /* -1 marks features filtered out above */
CvScalar line_color; line_color = CV_RGB(0,0,0);

if(green_pts[i].x < frame1->width / 3)
{
cvSet1D(left_hull, left_hull_count, cvScalar(green_pts[i].x, green_pts[i].y,0,0));
left_hull_count++;
line_color = CV_RGB(0,128,255);
}
if(green_pts[i].x > (frame1->width * 2) / 3)
{
cvSet1D(right_hull, right_hull_count, cvScalar(green_pts[i].x, green_pts[i].y,0,0));
right_hull_count++;
line_color = CV_RGB(255,128,0);
}

cvCircle(frame1, green_pts[i], 3 , line_color, CV_FILLED);

}

printf("left hull count: %d\n", left_hull_count);
printf("right hull count: %d\n", right_hull_count);

// left_hull and right_hull now contain the points we think belong to passing cars.
// Only the first left_hull_count / right_hull_count rows are valid, so take the
// hull over just those rows rather than the whole (partly uninitialized) matrix.

if (left_hull_count > 0)
{
CvMat left_valid;
cvGetRows(left_hull, &left_valid, 0, left_hull_count);
cvConvexHull2(&left_valid, &left_mat, CV_CLOCKWISE, 0);
}
else left_mat.cols = 0;

if (right_hull_count > 0)
{
CvMat right_valid;
cvGetRows(right_hull, &right_valid, 0, right_hull_count);
cvConvexHull2(&right_valid, &right_mat, CV_CLOCKWISE, 0);
}
else right_mat.cols = 0;

printf("left convex hull num pts: %d\n", left_mat.cols);
printf("right convex hull num pts: %d\n", right_mat.cols);

CvPoint poly_pts[NUM_FEATURES];

if(left_hull_count > 5)
{
CvPoint pt0;
pt0.x = cvGet1D(left_hull, lhull[left_mat.cols-1]).val[0];
pt0.y = cvGet1D(left_hull, lhull[left_mat.cols-1]).val[1];
// draw hull on features img
for(int i = 0; i < left_mat.cols; i++)
{
CvPoint pt;
CvScalar s = cvGet1D(left_hull, lhull[i]);
pt.x = (int) s.val[0];
pt.y = (int) s.val[1];

//poly_pts[i] = pt;

cvLine(frame1, pt0, pt, CV_RGB(0,0,0));
pt0 = pt;
}
}

if(right_hull_count > 5)
{
CvPoint pt0;
pt0.x = cvGet1D(right_hull, rhull[right_mat.cols-1]).val[0];
pt0.y = cvGet1D(right_hull, rhull[right_mat.cols-1]).val[1];
// draw hull on features img
for(int i = 1; i < right_mat.cols; i++)
{
CvPoint pt;
CvScalar s = cvGet1D(right_hull, rhull[i]);
pt.x = (int) s.val[0];
pt.y = (int) s.val[1];

//poly_pts[i] = pt;

cvLine(frame1, pt0, pt, CV_RGB(0,0,0));
pt0 = pt;
}
}

// rasterize answers
//cvFillConvexPoly(answer, poly_pts, hull_count, CV_RGB(k+1,k+1,k+1));

/* Now display the image we drew on. Recall that "Optical Flow" is the name of
* the window we created above.
*/
cvShowImage("Optical Flow", frame1);

cvReleaseImage(&frame1);
cvReleaseImage(&frame1_1C);
cvReleaseImage(&frame2_1C);
cvReleaseImage(&eig_image);
cvReleaseImage(&temp_image);
cvReleaseImage(&pyramid1);
cvReleaseImage(&pyramid2);
cvReleaseImage(&image_Gray);
cvReleaseMat(&left_hull);
cvReleaseMat(&right_hull);
free(green_pts);
free(lhull);
free(rhull);
/* The caller should cvWaitKey() so the user has time to look at the image.
* If the argument is 0 it waits forever; otherwise it waits that number of
* milliseconds. The return value is the key the user pressed.
*/
//cvDestroyWindow("Optical Flow");
return 0;
}
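In case you want to play with this at home, here’s a minimal sketch of a driver loop (not part of our actual project code) that feeds frame pairs to opticalFlow(). It assumes the same OpenCV 1.x C API as above and a video path on the command line; note the cvCloneImage() call, which works around the cvQueryFrame() buffer reuse described in the comments.

/* Hypothetical driver, a sketch only: grab successive frames, clone each one
* (cvQueryFrame() reuses a single internal buffer), and hand opticalFlow()
* the previous color frame plus the current frame converted to grayscale
* (it goes straight into cvCalcOpticalFlowPyrLK, which wants 8-bit mono). */
int main(int argc, char **argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <video.avi>\n", argv[0]);
        return 1;
    }

    CvCapture *capture = cvCaptureFromAVI(argv[1]);
    if (capture == NULL)
    {
        fprintf(stderr, "Error: could not open %s\n", argv[1]);
        return 1;
    }

    IplImage *prev = NULL; /* color copy of the previous frame */
    IplImage *frame;
    while ((frame = cvQueryFrame(capture)) != NULL)
    {
        IplImage *curr = cvCloneImage(frame);
        if (prev != NULL)
        {
            IplImage *next_gray = cvCreateImage(cvGetSize(curr), IPL_DEPTH_8U, 1);
            cvCvtColor(curr, next_gray, CV_BGR2GRAY);
            opticalFlow(prev, next_gray);
            cvReleaseImage(&next_gray);
            cvReleaseImage(&prev);
        }
        prev = curr;
        if (cvWaitKey(10) == 27) break; /* Esc quits */
    }
    cvReleaseImage(&prev);
    cvReleaseCapture(&capture);
    cvDestroyWindow("Optical Flow");
    return 0;
}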

Engines of Commerce

On Feb-15-06 at 15:55:19 PST, seller added the following information:

PS – Serious bidders only, please. I have to pay eBay a percentage of the winning bid, so if you don’t pay, I will have to report you as a non-paying bidder to avoid fees. Thanks.

Evil Killer Robot Auction Views: 2346