Commit 71b4b626 authored by W. Spencer Smith

Removal of unit vnv plan document (relevant parts moved to vnv plan)

parent dd22211c
# Makefile
# From https://danielkaes.wordpress.com/2009/03/14/compiling-latex-documents-using-makefiles/
PROJECT=UnitVnVPlan
TEX=pdflatex
BIBTEX=bibtex
BUILDTEX=$(TEX) $(PROJECT).tex

# Build the pdf: pdflatex, bibtex, then pdflatex twice more so citations and
# cross-references resolve.
all:
	$(BUILDTEX)
	$(BIBTEX) $(PROJECT)
	$(BUILDTEX)
	$(BUILDTEX)

# clean-all also removes the generated pdf; clean keeps it.
clean-all:
	rm -f *.dvi *.log *.bak *.aux *.bbl *.blg *.idx *.ps *.eps *.pdf *.toc *.out *~

clean:
	rm -f *.log *.bak *.aux *.bbl *.blg *.idx *.toc *.out *.lof *.lot *~
\ No newline at end of file
# Test Plan
The folders and files in this folder are as follows:
Describe ...
File deleted
\documentclass[12pt]{article}
\usepackage{hyperref}
\hypersetup{colorlinks=true,
linkcolor=blue,
citecolor=blue,
filecolor=blue,
urlcolor=blue,
unicode=false}
\urlstyle{same}
\usepackage{enumitem,amssymb}
\newlist{todolist}{itemize}{2}
\setlist[todolist]{label=$\square$}
\usepackage{pifont}
\newcommand{\cmark}{\ding{51}}%
\newcommand{\xmark}{\ding{55}}%
\newcommand{\done}{\rlap{$\square$}{\raisebox{2pt}{\large\hspace{1pt}\cmark}}%
\hspace{-2.5pt}}
\newcommand{\wontfix}{\rlap{$\square$}{\large\hspace{1pt}\xmark}}
\begin{document}
\title{SRS and CA Checklist}
\author{Spencer Smith}
\date{\today}
\maketitle
% Show an item is done by \item[\done] Frame the problem
% Show an item will not be fixed by \item[\wontfix] profit
\begin{itemize}
\item Follows the template, all parts present
\begin{todolist}
\item Table of contents
\item Pages are numbered
\item Revision history included for major revisions
\item Sections from template are all present
\item Values of auxiliary constants are given (constants are used to improve
maintainability and to increase understandability)
\end{todolist}
\item Grammar, spelling, presentation
\begin{todolist}
\item No spelling mistakes (use a spell checker!)
\item No grammar mistakes (review, ask someone else to review (at least a few
sections))
\item Paragraphs are structured well (clear topic sentence, cohesive)
\item Paragraphs are concise (not wordy)
\item No Low Information Content (LIC) phrases
(\href{https://www.webpages.uidaho.edu/range357/extra-refs/empty-words.htm}{List
of LIC phrases})
\item All hyperlinks work
\item Every figure has a caption
\item Every table has a heading
\item Symbolic names are used for quantities, rather than literal values
\end{todolist}
\item LaTeX
\begin{todolist}
\item Template comments (plt) do not show in the pdf version, either by
removing them, or by turning them off.
\item References and labels are used so that maintenance is feasible
\end{todolist}
\item Overall qualities of documentation
\begin{todolist}
\item Specific programming language is listed
\item Specific linter tool is listed (if appropriate)
\item Specific coding standard is given
\item Specific unit testing framework is given
\item Investigation of code coverage measuring tools
\item Specific plans for Continuous Integration (CI), or an explanation that CI
is not being done
\item Specific performance measuring tools listed (like Valgrind), if
appropriate
\item Very careful use of random testing
\end{todolist}
\end{itemize}
\end{document}
File deleted
\documentclass[12pt, titlepage]{article}
\usepackage{booktabs}
\usepackage{tabularx}
\usepackage{hyperref}
\hypersetup{
colorlinks,
citecolor=blue,
filecolor=black,
linkcolor=red,
urlcolor=blue
}
\usepackage[round]{natbib}
\input{../../Comments}
\input{../../Common}
\begin{document}
\title{Project Title: Unit Verification and Validation Plan for \progname{}}
\author{Author Name}
\date{\today}
\maketitle
\pagenumbering{roman}
\section{Revision History}
\begin{tabularx}{\textwidth}{p{3cm}p{2cm}X}
\toprule {\bf Date} & {\bf Version} & {\bf Notes}\\
\midrule
Date 1 & 1.0 & Notes\\
Date 2 & 1.1 & Notes\\
\bottomrule
\end{tabularx}
~\newpage
\tableofcontents
\listoftables
\wss{Do not include if not relevant}
\listoffigures
\wss{Do not include if not relevant}
\newpage
\section{Symbols, Abbreviations and Acronyms}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l l}
\toprule
\textbf{symbol} & \textbf{description}\\
\midrule
T & Test\\
\bottomrule
\end{tabular}\\
\wss{symbols, abbreviations or acronyms -- you can reference the SRS
\citep{SRS}, MG or MIS tables if needed}
\newpage
\pagenumbering{arabic}
This document ... \wss{provide an introductory blurb and roadmap of the
unit V\&V plan}
\wss{If content in this document is already covered in the VnV System plan, you
do not have to repeat it.}
\section{General Information}
\subsection{Purpose}
\wss{Identify software that is being unit tested (verified).}
\subsection{Scope}
\wss{What modules are outside of the scope? If there are modules that are
developed by someone else, then you would say here if you aren't planning on
verifying them. There may also be modules that are part of your software, but
have a lower priority for verification than others. If this is the case,
explain your rationale for the ranking of module importance.}
\section{Plan}
\subsection{Verification and Validation Team}
\wss{You, your classmates and the course instructor. Maybe your supervisor.}
\subsection{Automated Testing and Verification Tools}
\wss{What tools are you using for automated testing? Likely a unit testing
framework and maybe a profiling tool, like Valgrind. Other possible tools
include a static analyzer, make, continuous integration tools, test coverage
tools, etc. Explain your plans for summarizing code coverage metrics.
Linters are another important class of tools. For the programming language
you select, you should look at the available linters. There may also be tools
that verify that coding standards have been respected, like flake8 for
Python.}
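\wss{For instance (a minimal sketch only, not a prescribed tool chain): if the
project were written in Python, tested with pytest, and measured with
coverage.py, the coverage summary could come from a small driver script like
the one below. The package name \texttt{mymodule} and the \texttt{tests}
directory are invented placeholders.}

\begin{verbatim}
# run_coverage.py -- hypothetical sketch; assumes pytest and coverage.py are
# the chosen automated testing and coverage tools.
import coverage
import pytest

cov = coverage.Coverage(source=["mymodule"])  # "mymodule" is a placeholder package
cov.start()
exit_code = pytest.main(["tests"])            # run the automated unit tests
cov.stop()
cov.save()
total = cov.report()                          # prints a per-file table, returns overall %
print(f"pytest exit code: {exit_code}; overall statement coverage: {total:.1f}%")
\end{verbatim}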
\subsection{Non-Testing Based Verification}
\wss{List any approaches like code inspection, code walkthrough, symbolic
execution etc. Enter not applicable if you do not plan on any non-testing
based verification.}
\section{Unit Test Description}
\wss{Reference your MIS and explain your overall philosophy for test case
selection.}
\subsection{Tests for Functional Requirements}
\wss{Most of the verification will be through automated unit testing. If
appropriate, specific modules can be verified by a non-testing based
technique. That can also be documented in this section.}
\subsubsection{Module 1}
\wss{Include a blurb here to explain why the subsections below cover the module.
References to the MIS would be good. You will want tests from a black box
perspective and from a white box perspective. Explain to the reader how the
tests were selected. A hypothetical sketch of how one such case could be
automated in a unit testing framework is given after the list below.}
\begin{enumerate}
\item{test-id1\\}
Type: \wss{Functional, Dynamic, Manual, Automatic, Static etc. Most will
be automatic}
Initial State:
Input:
Output: \wss{The expected result for the given inputs}
Test Case Derivation: \wss{Justify the expected value given in the Output field}
How test will be performed:
\item{test-id2\\}
Type: \wss{Functional, Dynamic, Manual, Automatic, Static etc. Most will
be automatic}
Initial State:
Input:
Output: \wss{The expected result for the given inputs}
Test Case Derivation: \wss{Justify the expected value given in the Output field}
How test will be performed:
\item{...\\}
\end{enumerate}
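\wss{As a hypothetical illustration of how the fields above become an automated
test (a sketch only, assuming a Python project tested with pytest; the file
name and the \texttt{lerp} routine are invented placeholders): the Initial
State becomes any setup code, the Input becomes the call arguments, and the
Output becomes the assertion.}

\begin{verbatim}
# test_module1.py -- hypothetical sketch; assumes pytest and that the module
# under test exports a linear interpolation routine lerp(x0, y0, x1, y1, x).

def lerp(x0, y0, x1, y1, x):
    """Placeholder standing in for the routine imported from the module under test."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def test_lerp_midpoint():
    # Input: the midpoint of [0, 2]; Output: the mean of the endpoint values,
    # as would be derived from the MIS specification of the routine
    assert lerp(0.0, 0.0, 2.0, 4.0, 1.0) == 2.0

def test_lerp_left_endpoint():
    # white box boundary case: evaluating at x0 must return y0 exactly
    assert lerp(1.0, 3.0, 5.0, 7.0, 1.0) == 3.0
\end{verbatim}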
\subsubsection{Module 2}
...
\subsection{Tests for Nonfunctional Requirements}
\wss{If there is a module that needs to be independently assessed for
performance, those test cases can go here. In some projects, planning for
nonfunctional tests of units will not be that relevant.}
\wss{These tests may involve collecting performance data from previously
mentioned functional tests.}
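\wss{One minimal sketch of how such performance data could be collected,
assuming Python and pytest (the \texttt{solve} routine, the input size, and
the half-second budget below are invented placeholders, not values prescribed
by this template):}

\begin{verbatim}
# test_performance.py -- hypothetical sketch; times the functional entry point
# of the module under test and checks it against a chosen performance budget.
import time

def solve(n):
    """Placeholder for the real module routine being timed."""
    return sum(i * i for i in range(n))

def test_solve_runs_within_budget():
    start = time.perf_counter()
    solve(100000)
    elapsed = time.perf_counter() - start
    # compare against (or simply record) the budget chosen in this plan
    assert elapsed < 0.5
\end{verbatim}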
\subsubsection{Module ?}
\begin{enumerate}
\item{test-id1\\}
Type: \wss{Functional, Dynamic, Manual, Automatic, Static etc. Most will
be automatic}
Initial State:
Input/Condition:
Output/Result:
How test will be performed:
\item{test-id2\\}
Type: Functional, Dynamic, Manual, Static etc.
Initial State:
Input:
Output:
How test will be performed:
\end{enumerate}
\subsubsection{Module ?}
...
\subsection{Traceability Between Test Cases and Modules}
\wss{Provide evidence that all of the modules have been considered.}
\bibliographystyle{plainnat}
\bibliography{../../../refs/References}
\newpage
\section{Appendix}
\wss{This is where you can place additional information, as appropriate}
\subsection{Symbolic Parameters}
\wss{The definition of the test cases may call for SYMBOLIC\_CONSTANTS.
Their values are defined in this section for easy maintenance.}
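\wss{For example (a hypothetical sketch assuming Python): the symbolic
parameters could live in one small module that every test imports, so a change
to a tolerance or bound is made in exactly one place. The names and values
below are examples only.}

\begin{verbatim}
# symbolic_parameters.py -- hypothetical sketch; names and values are examples.
RELATIVE_TOLERANCE = 1e-6   # accuracy bound used when comparing real-valued outputs
MAX_ITERATIONS = 1000       # upper bound assumed by convergence-related test cases

# A test case then refers to the name, never the literal value, e.g.:
#   assert abs(actual - expected) <= RELATIVE_TOLERANCE * abs(expected)
\end{verbatim}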
\end{document}
\ No newline at end of file
@@ -12,8 +12,8 @@
}
\usepackage[round]{natbib}
\input{../../Comments}
\input{../../Common}
\input{../Comments}
\input{../Common}
\begin{document}
@@ -113,6 +113,19 @@ This document ... \wss{provide an introductory blurb and roadmap of the
the implementation. Potential techniques include code walkthroughs, code
inspection, static analyzers, etc.}
\subsection{Automated Testing and Verification Tools}
\wss{What tools are you using for automated testing? Likely a unit testing
framework and maybe a profiling tool, like Valgrind. Other possible tools
include a static analyzer, make, continuous integration tools, test coverage
tools, etc. Explain your plans for summarizing code coverage metrics.
Linters are another important class of tools. For the programming language
you select, you should look at the available linters. There may also be tools
that verify that coding standards have been respected, like flake8 for
Python.}
\wss{The details of this section will likely evolve as you get closer to the
implementation.}
\subsection{Software Validation Plan}
\wss{If there is any external data that can be used for validation, you should
@@ -224,10 +237,126 @@ How test will be performed:
\wss{Provide a table that shows which test cases are supporting which
requirements.}
\section{Unit Test Description}
\wss{Reference your MIS and explain your overall philosophy for test case
selection.}
\wss{This section should not be filled in until after the MIS has
been completed.}
\subsection{Unit Testing Scope}
\wss{What modules are outside of the scope? If there are modules that are
developed by someone else, then you would say here if you aren't planning on
verifying them. There may also be modules that are part of your software, but
have a lower priority for verification than others. If this is the case,
explain your rationale for the ranking of module importance.}
\subsection{Tests for Functional Requirements}
\wss{Most of the verification will be through automated unit testing. If
appropriate, specific modules can be verified by a non-testing based
technique. That can also be documented in this section.}
\subsubsection{Module 1}
\wss{Include a blurb here to explain why the subsections below cover the module.
References to the MIS would be good. You will want tests from a black box
perspective and from a white box perspective. Explain to the reader how the
tests were selected.}
\begin{enumerate}
\item{test-id1\\}
Type: \wss{Functional, Dynamic, Manual, Automatic, Static etc. Most will
be automatic}
Initial State:
Input:
Output: \wss{The expected result for the given inputs}
Test Case Derivation: \wss{Justify the expected value given in the Output field}
How test will be performed:
\item{test-id2\\}
Type: \wss{Functional, Dynamic, Manual, Automatic, Static etc. Most will
be automatic}
Initial State:
Input:
Output: \wss{The expected result for the given inputs}
Test Case Derivation: \wss{Justify the expected value given in the Output field}
How test will be performed:
\item{...\\}
\end{enumerate}
\subsubsection{Module 2}
...
\subsection{Tests for Nonfunctional Requirements}
\wss{If there is a module that needs to be independently assessed for
performance, those test cases can go here. In some projects, planning for
nonfunctional tests of units will not be that relevant.}
\wss{These tests may involve collecting performance data from previously
mentioned functional tests.}
\subsubsection{Module ?}
\begin{enumerate}
\item{test-id1\\}
Type: \wss{Functional, Dynamic, Manual, Automatic, Static etc. Most will
be automatic}
Initial State:
Input/Condition:
Output/Result:
How test will be performed:
\item{test-id2\\}
Type: Functional, Dynamic, Manual, Static etc.
Initial State:
Input:
Output:
How test will be performed:
\end{enumerate}
\subsubsection{Module ?}
...
\subsection{Traceability Between Test Cases and Modules}
\wss{Provide evidence that all of the modules have been considered.}
\bibliographystyle{plainnat}
\bibliography{../../../refs/References}
\bibliography{../../refs/References}
\newpage