MATLAB Rubrics | EF 105
MATLAB scripts submitted for labs will be scored in two different ways, which will be listed separately on your grade sheet:
- Human scoring assesses communication in the code and results. Both the code itself and the output of the script when run serve to communicate the details of a problem solution to humans. We will read your scripts in this context and do a single round of human scoring at some point after the official due date.
- Computer scoring assesses completeness and numeric correctness of results. Code must also be understandable by a computer, and to be useful must run and produce numerically correct results. This assessment can be done automatically and will be re-run each morning up until the next lab day. For example, with Lab 9 files due Friday, April 2, the computer scoring will run each morning from Tuesday, March 30 through Tuesday, April 6.
- The daily test runs are not a substitute for running your code yourself and self-evaluating the results before submitting!
- If your code runs without error for you, but errors are reported when it runs on the server, the most likely cause is an incorrect path to a data file.
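As a sketch of the path issue (the filename `loopData.csv` is hypothetical, used only for illustration):

```matlab
% An absolute path only exists on your own computer and will fail on the server:
% data = readmatrix('C:\Users\me\Documents\EF105\loopData.csv');

% A relative path works anywhere the data file sits next to the script:
data = readmatrix('loopData.csv');
```

Submitting the data file alongside the script and referring to it by name alone keeps the code portable between your computer and the grading server.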
Human Scored Items
In general, when reading code as humans we would like it to clearly communicate the solution to the given problem in a way that makes it easy to verify that it will calculate what it claims. In addition to the three scoring levels of "Good", "Developing", and "Beginning" described below, each item also has an "Insufficient to Score" level that we will use when what is submitted is incomplete to a degree that prevents us from reasonably assessing that item.
Organization
| Good | Developing | Beginning |
|---|---|---|
| Code is organized into logical sections, code associated with a particular section is fully contained in that section. | Logical sections are included, and most code is organized into sections according to its role. | Logical sections not included, or code does not appear to be organized according to its role. |
Readability
Readability is a measure of how easy it is for a human to read the text of your code. Like any written document, readability is improved by consistent use of whitespace, punctuation, and capitalization. Readability applies both to comments and code.
| Good | Developing | Beginning |
|---|---|---|
| Consistent use of whitespace between logical chunks of code and around operators and parentheses. Consistent formatting of comments. | Minor formatting inconsistencies that may have a negative impact on the ease of reading and understanding the code. | Significant formatting inconsistencies that interfere with reading and understanding the code. |
Context
| Good | Developing | Beginning |
|---|---|---|
| Variable names and comments consistently use language related to the context of the problem and solution. | Most variable names and comments use language related to the context of the problem and solution. | Few variable names or comments use language related to the context of the problem and solution. |
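The contrast can be sketched as follows (the car/spring values and names here are illustrative, not taken from an actual assignment):

```matlab
% Generic names give the reader no context:
m = 0.3;
k = 1560;

% Context-rich names tie each value to the problem:
massCar         = 0.3;   % mass of the car (kg)
stiffnessSpring = 1560;  % spring constant of the launcher (N/m)
```

Both versions compute identically; only the second tells a reader what problem is being solved.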
Output and Figures
| Good | Developing | Beginning |
|---|---|---|
| If requested, only output related to the final result of the solution is printed to the command window. | Some output unrelated to the final solution is printed to the command window. | Much of the output in the command window is not related to the final solution. |
| Figures use markers to show measured data and lines to show smoothed data. | | |
| Data plotted look consistent with the problem. | Data look mostly consistent with the problem, e.g. axes may be inverted. | Difficult to tell if the data plotted are consistent with the problem. |
| All data required to understand the solution are plotted. | Some data required to understand the solution are missing. | Much of the data required to understand the solution is missing. |
- We consistently see a single space after the '%' starting a comment and the text of the comment.
- Same-line comments for a code chunk, e.g. lines 10-13 are formatted so that they all align.
- All variable names consistently use camelCase (starts with a lower case letter, first letter of each word is upper case).
- Another valid strategy is to use underscores to denote spaces between words in variable names, e.g. mass_car. Pick one style or the other, but do not mix both in the same script; mixing them would make the code less readable.
- All variable names consistently start with what is being measured/reported ('mass'), and then the name of the object for which that measure belongs, e.g. 'car'. It would also be fine to reverse this, e.g. carMass, as long as this is consistently applied for all variable names. Mixing these two conventions in the same script would make the code less readable.
- Because the letter 'g' is widely used to denote the acceleration due to gravity of the local planet (usually Earth), a single letter variable name is sufficient context for this measure. On the other hand, although 'm' is widely used to denote the mass of something in a physics problem, it could be the mass of anything: a car, a train, Dr. Bennett, etc. so we need a more descriptive variable name that directly ties this value to the context of the problem.
- A single empty line is used to delineate related chunks of code within each section, as well as the end of one section and start of the next.
- Language in the variable names and comments is consistent with language of the problem and solution: It is apparent from the code that the problem involves cars and looped tracks, and that the solution is organized around the concept of conservation of energy.
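A minimal sketch of a script following these conventions (the values and the car/loop details are illustrative, not the actual graded example):

```matlab
% Car launched around a looped track: find the minimum speed at the top.
clear
clc

%% Initialize
g          = 9.81;  % acceleration due to gravity (m/s^2)
massCar    = 0.3;   % mass of the car (kg)
radiusLoop = 0.96;  % radius of the looped track (m)

%% Calculate
% Minimum speed at the top of the loop for the car to maintain contact:
speedTop = sqrt(g*radiusLoop);
```

Note the `%%` section headers, the single space after each `%`, the aligned same-line comments, the camelCase measure-then-object names, and the lone exception for `g`.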
Organization
- The calculation of the height of the loop from the initialized value is included on line 15, in the initialize section. This section is meant to group lines that initialize values from the outside world and that may change for different problem setups. The height of the loop is not such a value, since it is calculated from another value.
Readability
- Lines 10-13: same-line comments are not aligned with one another. Try to align same-line comments with the others in the same code chunk.
- Lines 17-33: no empty lines between related chunks of code to help separate them.
- Lines 36-37: intermediate calculation stored to kineticEnergy, but then this value is never used, instead the calculation is repeated in line 37.
Context
Beginning

```matlab
clear
clc
%% Initialize
g = 9.81; % acceleration due to gravity (m/s^2)
r = 0.96; % radius of loop (m)
v = sqrt(g*r); % take the square root of g times r.
h = 2*r;
%% Calculate
% m*g*h will be non-zero, since h is greater than 0:
% 0.5*k*x^2 = 0.5*m*v^2 + m*g*h
% solving for x...
m = 0.3;
k = 1560;
x = sqrt((m*v^2+2*m*g*h)/k)
```

- Code is not completely organized into sections
- No blank lines to help separate chunks of code and sections
- Generic variable names
Organization
Since no code other than initialization code was included, we cannot assess this script's organization.
Readability
We can still assess the readability of what is included; this example would be considered readable. If only a single value had been initialized, we would not be able to assess readability.
Context
We can still assess the context of what is included. If no code were included, or no comments beyond those in the template, we would not be able to score for context.
Computer Scored Items
Output
Errors
In most cases we are expecting there to be no errors in submitted code. Code that contains syntax errors cannot be run at all and so no additional tests on variables or figures will take place. In the case of runtime errors, the code up to the location where the error occurred will run and any workspace variables that are present will be tested for correctness.
Workspace
We use the workspace to check the numerical correctness of intermediate and final results. It is important to follow naming rules for variables that will be checked for correctness. Correct calculations assigned to variables named differently than expected will not be detected as correct results!
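For example, suppose an assignment asks for the final answer in a variable named `x` (the names and values here are illustrative):

```matlab
m = 0.3;                  % mass (kg), illustrative value
k = 1560;                 % spring constant (N/m), illustrative value

x           = sqrt(m/k);  % stored in x: found and checked for correctness
compression = sqrt(m/k);  % same correct value, different name: not detected
```

Read the variable names requested in each lab carefully; the automated check looks only for those names.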
Figures
We make use of the named figure handles in the workspace to check structural elements of requested figures. This is why it is important to create each figure using the handle variable name given in the assignment.
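As a sketch, if an assignment asks for a plot stored in a handle named `fig1` (the handle name and data here are illustrative):

```matlab
timeData     = 0:0.5:3;            % measured times (s), illustrative data
positionData = 2*timeData.^2;      % measured positions (m), illustrative data

fig1 = figure;                     % fig1 is the handle the checker inspects
plot(timeData, positionData, 'o')  % markers for measured data
xlabel('Time (s)')
ylabel('Position (m)')
```

Creating the figure without capturing its handle, or capturing it under a different name, leaves nothing for the automated figure checks to inspect.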