Parser Planning Based on Chosen Benchmarks

Fabiha Hannan » 04 February 2014

Since Paul is now working on a different project, Sami, the newest member, and I are taking the work we've done on benchmarks and using it to build a parser. Most of our meeting was spent refreshing my memory and catching Sami up on everything. Sami may also have a parser in Python that we can base ours on; essentially, we want to tweak it to parse out the values needed to calculate the metrics for a benchmark program. Afterwards, I worked more on looking at the minutes I took for the last meeting of last semester (on...
Read more...
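For reference, a minimal sketch of what such a statistics parser might look like in Python, assuming the simulator emits simple "name = value" lines. The `gpu_tot_sim_*` names below follow GPGPU-Sim's style but are illustrative here and should be checked against the actual output files:

```python
import re

# Matches simple "key = value" statistic lines, e.g. "gpu_tot_sim_cycle = 1000".
STAT_RE = re.compile(r"^(\w+)\s*=\s*([-+]?\d+(?:\.\d+)?)\s*$")

def parse_stats(lines):
    """Collect numeric statistics from simulator output lines into a dict."""
    stats = {}
    for line in lines:
        m = STAT_RE.match(line.strip())
        if m:
            name, value = m.groups()
            stats[name] = float(value)
    return stats

# Hypothetical output lines, used to demonstrate the parser.
sample = [
    "some unrelated log text",
    "gpu_tot_sim_cycle = 1000",
    "gpu_tot_sim_insn = 2500",
]
stats = parse_stats(sample)
ipc = stats["gpu_tot_sim_insn"] / stats["gpu_tot_sim_cycle"]
print(ipc)  # instructions per cycle: 2.5
```

From there, computing a metric is just arithmetic on the extracted counters; the real work is matching the regular expressions to whatever the simulator actually prints.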

Minutes: Spring Meeting One

Akhil Bagaria » 28 January 2014

Spring 2014 Meeting One

- We have a machine on which user accounts need to be made; email Prof. Spjut your preferred username.
- The machine can run CUDA, so GPGPU work is good.
- We need a way to parse output from the GPGPU simulator.
- Sphynx is not going to be using Tera anymore, but will use charlab.eng.hmc.edu instead.
- Running example code on the new machine.
- Need to write Python scripts to parse the output.
- Need to reconsider the benchmark programs, as none of them ran on Tera.
- Someone needs to go through the output and figure out what we need from there. Just look at output file...
Read more...

Expanding on Spock

Eric Storm » 21 January 2014

Over break I spent more time familiarizing myself with the codebase for Spock. There are already traces of simulations for the NAS Parallel Benchmarks on tera, so there is already a lot of data to work with. Unfortunately, tera only has about 200 GB of free space, so it is not possible to store all of the data on tera simultaneously. As a result, I ran the simulations a few at a time and copied the final plots over to my computer before deleting the data. I was able to run full simulations for 13 programs with 6 different...
Read more...

Sphynx Project Work Plan

Dong-hyeon Park » 28 December 2013

Here is a general work plan for the Sphynx project, with the final goal being a paper ready by the end of April. I think we could have two groups working in parallel: one group setting up the simulator and running the tests, while the other writes scripts to extract and analyze the results.

Deadline   Duration   Task #1                         Task #2
Jan. 27    ~3 weeks   Pick Simulator & Machine        Pick Benchmarks and Test Programs
Feb. 17    3 weeks    Try Different Hardware Config   Write Scripts to Extract Simulation Results
Mar. 17    4 weeks    Simulate Simple Cache Config    Process and Analyze Result...
Read more...

Exploring Alternatives: Multi2Sim

Dong-hyeon Park » 13 December 2013

Prof. Spjut suggested looking into the Multi2Sim CPU/GPU simulator to see if it will work better than GPGPU-sim. Getting Multi2Sim to install and run on Tera was very simple, and you can run it using the command m2s. There is some example code inside the Sphynx group directory (which I just created) at /proj/sphynx/multi2sim/multi2sim-4.2/samples/. Look at their manual for more details. From the manual, setting up different memory hierarchies for different configurations of a CPU/GPU system seems fairly simple and straightforward. Making changes to the architecture seems to involve a lot more work, but that shouldn't be a problem for us. The...
Read more...
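As a rough illustration, the memory hierarchy in Multi2Sim is described by an INI-style configuration file along these lines. This fragment is sketched from memory of the 4.2 manual, so the section and key names should be verified against the actual documentation before use:

```ini
; Hypothetical single-level cache configuration (names illustrative).
[CacheGeometry geo-l1]
Sets = 64
Assoc = 2
BlockSize = 64
Latency = 2

[Module mod-l1-0]
Type = Cache
Geometry = geo-l1
LowNetwork = net0
LowModules = mod-mm

[Module mod-mm]
Type = MainMemory
BlockSize = 64
Latency = 100
HighNetwork = net0

[Network net0]
DefaultInputBufferSize = 1024
DefaultOutputBufferSize = 1024
DefaultBandwidth = 256

; Map core 0 onto the cache module above.
[Entry core-0]
Arch = x86
Core = 0
Thread = 0
DataModule = mod-l1-0
InstModule = mod-l1-0
```

Swapping in a different hierarchy for a given experiment should then mostly be a matter of editing this file rather than touching the simulator itself.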

Starting Spock

Eric Storm » 08 December 2013

I spent most of this week familiarizing myself with the concept and implementation of the project. I read over the paper that was submitted, as well as the reviews of the paper. In general, the paper is lacking in thoroughness as well as in any useful conclusions. I don't currently have access to the spock folder on tera, so I pulled the repository from GitHub. I looked through the makefile and all of the code and have a rudimentary understanding of what the programs are doing (unfortunately, they are almost entirely uncommented). I was able to successfully run the makefile...
Read more...

Benchmark Research

Fabiha Hannan » 08 December 2013

This week, Paul and I looked at the one benchmark that we have been able to successfully run so far, the Coulombic Potential (CP) benchmark. We measured values for the metrics we deemed important at last week's meeting. Here are the metrics we looked at at last week's meeting, along with a number indicating importance from 1 to 10:

- Instructions loaded per cycle – 5
- Average instruction read time – 3
- Stalls due to instruction cache – 10
- Instruction duplication – 8 (important but potentially difficult to measure)
- Average time to recache – low; ignored for now (paper not published for...
Read more...
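Several of the metrics above reduce to simple ratios of raw counters. A sketch of how we might compute the first and third once the parser has extracted the counts; the counter names here are placeholders, not the simulator's actual keys:

```python
def instructions_per_cycle(total_insns, total_cycles):
    """Instructions loaded per cycle: total instructions over total cycles."""
    if total_cycles <= 0:
        raise ValueError("cycle count must be positive")
    return total_insns / total_cycles

def icache_stall_fraction(icache_stall_cycles, total_cycles):
    """Fraction of cycles stalled on the instruction cache."""
    if total_cycles <= 0:
        raise ValueError("cycle count must be positive")
    return icache_stall_cycles / total_cycles

# Hypothetical counters pulled from a parsed simulator output file.
counters = {"insns": 50000, "cycles": 20000, "icache_stalls": 4000}
ipc = instructions_per_cycle(counters["insns"], counters["cycles"])
stall_frac = icache_stall_fraction(counters["icache_stalls"], counters["cycles"])
print(ipc, stall_frac)  # 2.5 0.2
```

Metrics like instruction duplication will need more than a ratio, which is presumably why it was flagged as difficult to measure.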

Paper Rejection

Josef Spjut » 02 December 2013

So the first submission of the paper for the spock project has almost officially been rejected. The reviewers heavily lean towards rejection in their scores, and I can't say I blame them. I would have made quite a few changes to the paper before sending the final version for publication if it had been accepted. However, I believe this rejection is an important experience for the (former) students involved in the process. First, it is encouraging that the reviewers were very constructive in their feedback. Almost universally they said the idea has novelty and potentially has merit, though we haven't...
Read more...