%
% Copyright (c) 1995-2015, Regents of the University of Colorado
%
% All rights reserved.
%
\documentclass[11pt]{article}
\usepackage{makeidx}
\usepackage{graphicx,color}
%\usepackage[pdfpagelabels=false,pageanchor,hyperindex,breaklinks,plainpages=false]{hyperref}
\usepackage[pdfpagelabels=false,pageanchor,hyperindex,plainpages=false]{hyperref}
\newcommand{\eidx}[1]{\index{#1@\emph{#1}}}
\newcommand{\vnumber}{2.7.0}
\title{CUDD: CU Decision Diagram Package\\Release \vnumber}
\author{Fabio Somenzi\\
Department of Electrical, Computer, and Energy Engineering\\
University of Colorado at Boulder\\
$<$Fabio@Colorado.EDU$>$}
\makeindex
\begin{document}
\bibliographystyle{plain}
\maketitle
\tableofcontents
\clearpage
%----------------------------------------
\section{Introduction}
\label{sec:intro}

The CUDD package provides functions to manipulate Binary Decision
Diagrams\index{BDD} (BDDs) \cite{BDD,BBR},
Algebraic Decision Diagrams\index{ADD} (ADDs)
\cite{Bahar93}, and Zero-suppressed Binary Decision
Diagrams\index{ZDD} (ZDDs)
\cite{Minato93}. BDDs are used to represent
switching\index{function!switching} functions; ADDs are used to
represent functions from $\{0,1\}^n$ to an arbitrary set. ZDDs
represent switching\index{function!switching} functions like BDDs;
however, they are much more efficient than BDDs when the functions to
be represented are characteristic\index{function!characteristic}
functions of cube\index{cube sets} sets, or in general, when the
ON-set\index{function!ON-set} of the function to be represented is
very sparse. They are inferior to BDDs in other cases.

The package provides a large set of operations on BDDs, ADDs, and
ZDDs, functions to convert BDDs into ADDs or ZDDs and vice versa, and
a large assortment of variable reordering\index{reordering} methods.

The CUDD package can be used in three ways:
\begin{itemize}
\item As a black box\index{box!black}. In this case, the application
  program that needs to manipulate decision diagrams only uses the
  exported functions of the package. The rich set of functions
  included in the CUDD package allows many applications to be written
  in this way. Section~\ref{sec:user} describes how to use the
  exported functions of the package. An application written in terms
  of the exported functions of the package need not concern itself
  with the details of variable reordering\index{reordering}, which may
  take place behind the scenes.
\item As a clear box\index{box!clear}. When writing a sophisticated
  application based on decision diagrams, efficiency often dictates
  that some functions be implemented as direct recursive manipulation
  of the diagrams, instead of being written in terms of existing
  primitive functions. Section~\ref{sec:prog} explains how to add new
  functions to the CUDD package. It also details how to write a
  recursive function that may be interrupted by
  dynamic\index{reordering!dynamic} variable reordering.
\item Through an interface. Object-oriented languages like C++ and
  Perl5 can free the programmer from the burden of memory management.
  A C++ interface is included in the distribution of CUDD. It
  automatically frees decision diagrams that are no longer used by the
  application and overloads operators. Almost all the functionality
  provided by the CUDD exported functions is available through the C++
  interface, which is especially recommended for fast prototyping.
  Section~\ref{sec:cpp} explains how to use the interface. A Perl5
  interface also exists and is distributed separately. (See
  Section~\ref{sec:getFriends}.)
\end{itemize}

In the following, the reader is assumed to be familiar with the basic
ideas about decision diagrams, as found, for instance, in \cite{BBR}.
%----------------------------------------
\section{How to Get CUDD}
\label{sec:getting}

\subsection{The CUDD Package}
\label{sec:getCUDD}

The CUDD package is available via anonymous FTP\index{FTP} from
vlsi.Colorado.EDU\@. A compressed tar file named
\texttt{cudd-\vnumber.tar.gz} can be found in directory \texttt{pub}.
Once you have this file,
\begin{quote}
\tt gzip\index{gzip} -dc cudd-\vnumber.tar.gz | tar xvf -
\end{quote}
will create directory \texttt{cudd-\vnumber} and its subdirectories.
These directories contain the decision diagram package, a few support
libraries\index{libraries}, and a test application based on the
decision diagram package. There is a README\index{README file} file
with instructions on configuration\index{configuration} and
installation\index{installation} in \texttt{cudd-\vnumber}. In short,
CUDD uses the GNU Autotools for its build.

Once you have made the libraries and program, you can type
\texttt{make check} to perform a sanity check. Among other things,
\texttt{make check} executes commands like
\begin{quote}
\tt cd nanotrav\index{nanotrav} \\
nanotrav -p 1 -autodyn -reordering sifting -trav mult32a.blif
\end{quote}
This command runs a simple-minded FSM traversal program on a simple
model. (On a reasonable machine, it takes less than 0.5 s.) The output
produced by the program is checked against
\texttt{cudd-\vnumber/nanotrav/mult32a.out}. More information on the
\texttt{nanotrav\index{nanotrav}} test program can be found in the file
\texttt{cudd-\vnumber/nanotrav/README\index{README file}}.

If you want to be notified of new releases of the CUDD package, send a
message to \texttt{Fabio@Colorado.EDU}.

\subsection{CUDD Friends}
\label{sec:getFriends}

Two CUDD extensions are available via anonymous FTP\index{FTP} from
vlsi.Colorado.EDU\@.
\begin{itemize}
\item \emph{PerlDD} is an object-oriented Perl5 interface to CUDD. It
  is organized as a standard Perl extension module. The Perl interface
  is at a somewhat higher level than the C++ interface, but it is not
  as complete.
\item \emph{DDcal} is a graphic BDD calculator based on CUDD, Perl-Tk,
  and dot. (See Section~\ref{sec:dump} for information on \emph{dot}.)
\end{itemize}
%----------------------------------------
\section{User's Manual}
\label{sec:user}

This section describes the use of the CUDD package as a black box.

\subsection{Compiling and Linking}
\label{sec:compileExt}\index{compiling}

To build an application that uses the CUDD package, you should add
\begin{verbatim}
#include "cudd.h"
\end{verbatim}
\index{cudd.h}
to your source files, and should link
\verb|libcudd.a|\index{libraries!cudd} to your executable.
Keep in mind that whatever flags affect the size of data
structures---for instance, the flags that enable 64-bit pointers where
available---must be specified when compiling both CUDD and the files
that include its header files.
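For example, assuming an application consisting of a single file
\texttt{app.c} and a CUDD tree built in place (both the file name and
the paths below are hypothetical and depend on how CUDD was configured
and installed), a compile-and-link command might look like:
\begin{quote}
\tt cc -Icudd-\vnumber/cudd -o app app.c cudd-\vnumber/cudd/libcudd.a -lm
\end{quote}
Depending on the configuration, additional support libraries from the
distribution may also need to appear on the link line.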
\subsection{Basic Data Structures}
\label{sec:struct}

\subsubsection{Nodes}
\label{sec:nodes}

BDDs, ADDs, and ZDDs are made of DdNode's. A DdNode\index{DdNode}
(node\index{node} for short) is a structure with several fields. Those
that are of interest to the application that uses the CUDD package as
a black box are the variable index\index{node!variable index}, the
reference\index{node!reference count} count, and the value. The
remaining fields are pointers that connect nodes among themselves and
that are used to implement the unique\index{table!unique} table. (See
Section~\ref{sec:manager}.)

The \emph{index} field holds the name of the variable that labels the
node. The index of a variable is a permanent attribute that reflects
the order\index{variable!order} of creation. Index 0 corresponds to
the variable created first. On a machine with 32-bit pointers, the
maximum number of variables is the largest value that can be stored in
an unsigned short integer minus 1. The largest index is reserved for
the constant\index{node!constant} nodes. When 64-bit pointers are
used, the maximum number of variables is the largest value that can be
stored in an unsigned integer minus 1.

When variables are reordered to reduce the size of the decision
diagrams, the variables may shift in the order, but they retain their
indices. The package keeps track of the variable
permutation\index{variable!permutation} (and its inverse). The
application is not affected by variable reordering\index{reordering},
except in the following cases.
\begin{itemize}
\item If the application uses generators\index{generator}
  (\emph{Cudd\_ForeachCube}\eidx{Cudd\_ForeachCube} and
  \emph{Cudd\_ForeachNode}\eidx{Cudd\_ForeachNode}) and reordering is
  enabled, then it must take care not to call any operation that may
  create new nodes (and hence possibly trigger reordering). This is
  because the cubes (i.e., paths) and nodes of a diagram change as a
  result of reordering.
\item If the application uses
  \emph{Cudd\_bddConstrain}\eidx{Cudd\_bddConstrain} and reordering
  takes place, then the property of \emph{Cudd\_bddConstrain} of
  being an image restrictor is lost.
\end{itemize}

The CUDD package relies on garbage\index{garbage collection}
collection to reclaim the memory used by diagrams that are no longer
in use. The scheme employed for garbage collection is based on keeping
a reference\index{node!reference count} count for each node. The
references that are counted are both the internal references
(references from other nodes) and external references (typically
references from the calling environment). When an application creates
a new BDD\index{BDD}, ADD\index{ADD}, or ZDD\index{ZDD}, it must
increase its reference count explicitly, through a call to
\emph{Cudd\_Ref}\eidx{Cudd\_Ref}. Similarly, when a diagram is no
longer needed, the application must call
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref} (for BDDs and
ADDs) or \emph{Cudd\_RecursiveDerefZdd}\eidx{Cudd\_RecursiveDerefZdd}
(for ZDDs) to ``recycle\index{node!recycling}'' the nodes of the
diagram.
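The reference-counting discipline just described can be illustrated
with a toy model. The following sketch (plain C with invented names;
it is \emph{not} CUDD's actual implementation) mimics the roles of
\emph{Cudd\_Ref} and \emph{Cudd\_RecursiveDeref}: creating a node
gives its children internal references, and releasing the last
reference to a node recursively releases its children.
\begin{verbatim}
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy node with a reference count; illustrative only. */
typedef struct Node {
    int ref;            /* internal + external references */
    struct Node *t, *e; /* then/else children (NULL for constants) */
} Node;

static Node *node_new(Node *t, Node *e) {
    Node *n = malloc(sizeof(Node));
    n->ref = 0; n->t = t; n->e = e;
    if (t) t->ref++;    /* children gain an internal reference */
    if (e) e->ref++;
    return n;
}

/* Analogue of Cudd_Ref: the caller claims an external reference. */
static void node_ref(Node *n) { n->ref++; }

/* Analogue of Cudd_RecursiveDeref: release one reference and, when
 * the count drops to zero, recursively release the children. */
static void node_recursive_deref(Node *n) {
    if (n && --n->ref == 0) {
        node_recursive_deref(n->t);
        node_recursive_deref(n->e);
        /* a real package would recycle the node here */
    }
}

int main(void) {
    Node *c = node_new(NULL, NULL);   /* constant node */
    node_ref(c);                      /* keep the constant alive */
    Node *f = node_new(c, c);
    node_ref(f);                      /* claim the new diagram */
    node_recursive_deref(f);          /* done with f ...          */
    assert(c->ref == 1);              /* ... but c is still held  */
    printf("ok\n");
    return 0;
}
\end{verbatim}
The same pattern, with the real CUDD functions, appears in the code
fragments of Sections~\ref{sec:basicBDD}--\ref{sec:basicZDD}.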
Terminal\index{node!constant!value} nodes carry a value. This is especially
important for ADDs. By default, the value is a double%
\index{floating point!double (C type)}.
To change to something different (e.g., an integer), the
package must be modified and recompiled. Support for this process is
very rudimentary.

\subsubsection{The Manager}
\index{manager}\label{sec:manager}

All nodes used in BDDs, ADDs, and ZDDs are kept in special
hash\index{table!hash} tables called the
\emph{unique\index{table!unique} tables}. Specifically, BDDs and ADDs
share the same unique table, whereas ZDDs have their own table. As
the name implies, the main purpose of the unique table is to guarantee
that each node is unique; that is, there is no other node labeled by
the same variable and with the same children. This uniqueness
property makes decision diagrams canonical\index{canonical}. The
unique\index{table!unique} tables and some auxiliary data structures
make up the DdManager\index{DdManager} (manager\index{manager} for
short). Though the application that uses only the exported functions
need not be concerned with most details of the manager, it has to
deal with the manager in the following sense. The application must
initialize the manager by calling an appropriate function. (See
Section~\ref{sec:init}.) Subsequently, it must pass a pointer to the
manager to all the functions that operate on decision diagrams.
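The uniqueness guarantee can be sketched with a toy hash-consing
routine (plain C with invented names; a linear search stands in for
CUDD's hash tables, one subtable per variable): a node is created only
if no node with the same variable index and children already exists,
so structurally equal functions share a single node.
\begin{verbatim}
#include <assert.h>
#include <stdio.h>

/* Toy unique table: a fixed pool searched linearly. */
typedef struct Node { int index; struct Node *t, *e; } Node;

static Node pool[256];
static int used = 0;

static Node *find_or_add(int index, Node *t, Node *e) {
    for (int i = 0; i < used; i++)
        if (pool[i].index == index && pool[i].t == t && pool[i].e == e)
            return &pool[i];       /* node already exists: reuse it */
    pool[used].index = index; pool[used].t = t; pool[used].e = e;
    return &pool[used++];
}

int main(void) {
    Node *one = find_or_add(-1, NULL, NULL);  /* constant node */
    Node *a = find_or_add(0, one, one);
    Node *b = find_or_add(0, one, one);       /* same triple */
    assert(a == b);  /* uniqueness: one node per (index,t,e) */
    printf("nodes used: %d\n", used);
    return 0;
}
\end{verbatim}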
% With the exception of a few statistical counters\index{statistical
% counters}, there are no global\index{global variables} variables in
% the CUDD package. Therefore, it is possible to have multiple
% managers simultaneously active in the same application.\footnote{The
% global statistical counters are used locally; hence they are
% compatible with the use of multiple managers.} It is the pointers to
% the managers that tell the functions on what data they should operate.

\subsubsection{Cache}
\index{cache}\label{sec:memoize}

Efficient recursive manipulation of decision diagrams requires the use
of a table to store computed results. This table\index{table!computed}
is called here the \emph{cache\index{cache}} because it is
effectively handled like a cache of variable but limited capacity. The
CUDD package starts by default with a small cache, and increases its
size until either no further benefit is achieved, or a limit size is
reached. The user can influence this policy by choosing initial and
limit values for the cache size.

Too small a cache will cause frequent overwriting of useful results.
Too large a cache will cause overhead, because the whole cache is
scanned every time garbage\index{garbage collection} collection takes
place. The optimal parameters depend on the specific application. The
default parameters work reasonably well for a large spectrum of
applications.

The cache\index{cache} of the CUDD package is used by most recursive
functions of the package, and can be used by user-supplied functions
as well. (See Section~\ref{sec:cache}.)
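The lossy nature of such a computed table can be sketched with a small
direct-mapped cache (plain C with invented names; CUDD's actual cache
is more elaborate): a result is stored under a hash of the operation
and its operands, and a new entry simply overwrites whatever hashed to
the same slot, which is why results may have to be recomputed.
\begin{verbatim}
#include <assert.h>
#include <stdio.h>

#define SLOTS 64

/* One cache entry: operation code, operands, and the stored result. */
typedef struct { int op, f, g, valid, result; } CacheEntry;
static CacheEntry cache[SLOTS];

static unsigned slot(int op, int f, int g) {
    return ((unsigned)(op * 31 + f) * 31 + g) % SLOTS;
}

static int cache_lookup(int op, int f, int g, int *result) {
    CacheEntry *e = &cache[slot(op, f, g)];
    if (e->valid && e->op == op && e->f == f && e->g == g) {
        *result = e->result;
        return 1;                   /* hit: reuse the stored result */
    }
    return 0;                       /* miss: the caller recomputes */
}

static void cache_insert(int op, int f, int g, int result) {
    CacheEntry *e = &cache[slot(op, f, g)];   /* overwrite the slot */
    e->op = op; e->f = f; e->g = g; e->result = result; e->valid = 1;
}

int main(void) {
    int r;
    assert(!cache_lookup(1, 10, 20, &r));     /* first query misses */
    cache_insert(1, 10, 20, 42);
    assert(cache_lookup(1, 10, 20, &r) && r == 42);  /* now it hits */
    printf("ok\n");
    return 0;
}
\end{verbatim}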
\subsection{Initializing and Shutting Down a DdManager}
\index{DdManager}\label{sec:init}

To use the functions in the CUDD package, one has first to initialize
the package itself by calling \emph{Cudd\_Init}\eidx{Cudd\_Init}.
This function takes five parameters:
\begin{itemize}
\item numVars\index{numVars}: It is the initial number of variables
  for BDDs and ADDs. If the total number of variables needed by the
  application is known, then it is slightly more efficient to create a
  manager with that number of variables. If the number is unknown, it
  can be set to 0, or to any other lower bound on the number of
  variables. Requesting more variables than are actually needed is
  not incorrect, but is not efficient.
\item numVarsZ\index{numVarsZ}: It is the initial number of variables
  for ZDDs. See Sections~\ref{sec:basicZDD} and~\ref{sec:convertZ} for
  a discussion of the value of this argument.
\item numSlots\index{numSlots}: Determines the initial size of each
  subtable\index{subtable} of the unique\index{table!unique} table.
  There is a subtable for each variable. The size of each subtable is
  dynamically adjusted to reflect the number of nodes. It is normally
  O.K. to use the default value for this parameter, which is
  CUDD\_UNIQUE\_SLOTS\index{CUDD\_UNIQUE\_SLOTS}.
\item cacheSize\index{cacheSize}: It is the initial size (number of
  entries) of the cache\index{cache}. Its default value is
  CUDD\_CACHE\_SLOTS\index{CUDD\_CACHE\_SLOTS}.
\item maxMemory\index{maxMemory}: It is the target value for the
  maximum memory occupation (in bytes). The package uses this value to
  decide two parameters:
  \begin{itemize}
  \item the maximum size to which the cache will grow, regardless of
    the hit rate or the size of the unique\index{table!unique} table;
  \item the maximum size to which growth of the unique table will be
    preferred to garbage collection.
  \end{itemize}
  If maxMemory is set to 0, CUDD tries to guess a good value based on
  the available memory.
\end{itemize}

A typical call to \emph{Cudd\_Init}\eidx{Cudd\_Init} may look
like this:
\begin{verbatim}
manager = Cudd_Init(0,0,CUDD_UNIQUE_SLOTS,CUDD_CACHE_SLOTS,0);
\end{verbatim}
To reclaim all the memory associated with a manager, an application
must call \emph{Cudd\_Quit}\eidx{Cudd\_Quit}. This is normally
done before exiting.
\subsection{Setting Parameters}
\label{sec:params}

The package provides several functions to set the parameters that
control various functions. For instance, the package has an automatic
way of determining whether a larger unique\index{table!unique} table
would make the application run faster. In that case, the package
enters a ``fast growth\index{table!growth}'' mode in which resizing of
the unique subtables is favored over garbage\index{garbage collection}
collection. When the unique table reaches a given size, however, the
package returns to the normal ``slow growth'' mode, even though the
conditions that caused the transition to fast growth still prevail.
The limit size for fast growth\index{growth} can be read by
\emph{Cudd\_ReadLooseUpTo}\eidx{Cudd\_ReadLooseUpTo} and changed by
\emph{Cudd\_SetLooseUpTo}\eidx{Cudd\_SetLooseUpTo}. Similar pairs of
functions exist for several other parameters. See also
Section~\ref{sec:stats}.
\subsection{Constant Functions}
\index{node!constant}\label{sec:const}

The CUDD Package defines several constant functions. These functions
are created when the manager\index{manager} is initialized, and are accessible
through the manager itself.

\subsubsection{One, Logic Zero, and Arithmetic Zero}
\index{zero!logical}\index{zero!arithmetic}\label{sec:zero}

The constant\index{node!constant} 1 (returned by
\emph{Cudd\_ReadOne}\eidx{Cudd\_ReadOne}) is common to BDDs, ADDs, and
ZDDs. However, its meaning is different for ADDs and BDDs, on the one
hand, and ZDDs, on the other hand. The diagram consisting of the
constant 1 node only represents the constant 1 function for ADDs and
BDDs. For ZDDs, its meaning depends on the number of variables: It is
the conjunction of the complements of all variables. Conversely, the
representation of the constant 1 function depends on the number of
variables. The constant 1 function of $n$ variables is returned by
\emph{Cudd\_ReadZddOne}\eidx{Cudd\_ReadZddOne}.

The constant 0 is common to ADDs and ZDDs, but not to BDDs. The
BDD\index{BDD} logic 0 is {\bf not} associated with the constant 0
function: It is obtained by complementation
(\emph{Cudd\_Not}\eidx{Cudd\_Not}) of the constant 1. (It is also
returned by \emph{Cudd\_ReadLogicZero}\eidx{Cudd\_ReadLogicZero}.)
All other constants are specific to ADDs.
\subsubsection{Predefined Constants}
\label{sec:predef-const}

Besides 0 (returned by \emph{Cudd\_ReadZero}\eidx{Cudd\_ReadZero})
and 1, the following constant\index{node!constant} functions are
created at initialization time.
\begin{enumerate}
\item PlusInfinity\index{PlusInfinity} and
  MinusInfinity\index{MinusInfinity}: On computers implementing the
  IEEE\index{floating point!IEEE Standard 754} standard 754 for
  floating-point\index{floating point} arithmetic, these two constants
  are set to the signed infinities\index{infinities}. The values of
  these constants are returned by
  \emph{Cudd\_ReadPlusInfinity}\eidx{Cudd\_ReadPlusInfinity} and
  \emph{Cudd\_ReadMinusInfinity}\eidx{Cudd\_ReadMinusInfinity}.
\item Epsilon\index{Epsilon}: This constant, initially set to
  $10^{-12}$, is used in comparing floating point values for equality.
  Its value is returned by the function
  \emph{Cudd\_ReadEpsilon}\eidx{Cudd\_ReadEpsilon}, and it can be
  modified by calling \emph{Cudd\_SetEpsilon}\eidx{Cudd\_SetEpsilon}.
  Unlike the other constants, it does not correspond to a node.
\end{enumerate}
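The role of Epsilon can be sketched as follows (plain C; the
comparison below is illustrative of the idea, not CUDD's exact code):
two terminal values are considered equal when they differ by less
than the tolerance, whose documented default is $10^{-12}$.
\begin{verbatim}
#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Tolerance for equality of terminal values (default 1e-12). */
static double epsilon = 1e-12;

static int values_equal(double a, double b) {
    return fabs(a - b) < epsilon;
}

int main(void) {
    assert(values_equal(1.0, 1.0 + 1e-13));   /* within tolerance */
    assert(!values_equal(1.0, 1.0 + 1e-6));   /* clearly distinct */
    printf("ok\n");
    return 0;
}
\end{verbatim}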
\subsubsection{Background}
\index{background value}\label{sec:background}

The background value is a constant\index{node!constant} typically used
to represent non-existing arcs in graphs. Consider a shortest path
problem. Two nodes that are not connected by an arc can be regarded as
being joined by an arc\index{graph!arc length} of infinite length. In
shortest path problems, it is therefore convenient to set the
background value to PlusInfinity\index{PlusInfinity}. In network flow
problems, on the other hand, two nodes not connected by an arc can be
regarded as joined by an arc\index{graph!arc capacity} of 0 capacity.
For these problems, therefore, it is more convenient to set the
background value to 0. In general, when representing
sparse\index{matrix!sparse} matrices, the background value is the value that
is assumed implicitly.

At initialization, the background value is set to 0. It can be read
with \emph{Cudd\_ReadBackground}\eidx{Cudd\_ReadBackground}, and
modified with \emph{Cudd\_SetBackground}. The background value
affects procedures that read sparse matrices and graphs
(like \emph{Cudd\_addRead}\eidx{Cudd\_addRead} and
\emph{Cudd\_addHarwell}\eidx{Cudd\_addHarwell}), procedures that print
out sum-of-product\index{function!sum of products} expressions for
ADDs (\emph{Cudd\_PrintMinterm}\eidx{Cudd\_PrintMinterm}), generators
of cubes (\emph{Cudd\_ForeachCube}\eidx{Cudd\_ForeachCube}), and
procedures that count minterms\index{function!minterms}
(\emph{Cudd\_CountMinterm}\eidx{Cudd\_CountMinterm}).
\subsubsection{New Constants}
\label{sec:newconst}

New constants\index{node!constant} can be created by calling
\emph{Cudd\_addConst}\eidx{Cudd\_addConst}. This function will
retrieve the ADD\index{ADD} for the desired constant, if it already
exists, or it will create a new one. Obviously, new constants should
only be used when manipulating ADDs.
\subsection{Creating Variables}
\label{sec:newvar}

Decision diagrams are typically created by combining simpler decision
diagrams. The simplest decision diagrams, of course, cannot be
created in that way. Constant functions have been discussed in
Section~\ref{sec:const}. In this section we discuss the simple
variable functions, also known as \emph{projection\index{projection
functions} functions}.

\subsubsection{New BDD and ADD Variables}
\label{sec:BDDADDvar}

The projection\index{projection functions} functions are distinct for
BDDs and ADDs. A projection function for BDDs consists of an internal
node with both outgoing arcs pointing to the constant 1. The
\emph{else} arc\index{arc!complement} is complemented.
An ADD projection function, on the other hand, has the \emph{else}
pointer directed to the arithmetic\index{zero!arithmetic} zero
function. One should never mix the two types of variables. BDD
variables should be used when manipulating BDDs, and ADD variables
should be used when manipulating ADDs. Three functions are provided
to create BDD variables:
\begin{itemize}
\item \emph{Cudd\_bddIthVar}\eidx{Cudd\_bddIthVar}: Returns
  the projection\index{projection functions} function with index $i$.
  If the function does not exist, it is created.
\item \emph{Cudd\_bddNewVar}\eidx{Cudd\_bddNewVar}: Returns a
  new projection\index{projection functions} function, whose index is
  the largest index in use at the time of the call, plus 1.
\item \emph{Cudd\_bddNewVarAtLevel}\eidx{Cudd\_bddNewVarAtLevel}:
  Similar to \emph{Cudd\_bddNewVar}\eidx{Cudd\_bddNewVar}. In
  addition, it allows one to specify the position in the variable
  order\index{variable!order} at which the new variable should be
  inserted. In contrast, \emph{Cudd\_bddNewVar}\eidx{Cudd\_bddNewVar}
  adds the new variable at the end of the order.
\end{itemize}
The analogous functions for ADDs are
\emph{Cudd\_addIthVar}\eidx{Cudd\_addIthVar},
\emph{Cudd\_addNewVar}\eidx{Cudd\_addNewVar}, and
\emph{Cudd\_addNewVarAtLevel}\eidx{Cudd\_addNewVarAtLevel}.
\subsubsection{New ZDD Variables}
\index{ZDD}\label{sec:ZDDvars}

Unlike the projection functions of BDDs and ADDs, the
projection\index{projection functions} functions of ZDDs have diagrams
with $n+1$ nodes, where $n$ is the number of variables. Therefore the
ZDDs of the projection functions change when new variables are added.
This will be discussed in Section~\ref{sec:basicZDD}. Here we assume
that the number of variables is fixed. The ZDD of the $i$-th
projection function is returned by
\emph{Cudd\_zddIthVar}\eidx{Cudd\_zddIthVar}.
\subsection{Basic BDD Manipulation}
\index{BDD}\label{sec:basicBDD}

Common manipulations of BDDs can be accomplished by calling
\emph{Cudd\_bddIte}. This function takes three BDDs, $f$, $g$, and
$h$, as arguments and computes $f\cdot g + f'\cdot h$. Like all the
functions that create new BDDs or ADDs,
\emph{Cudd\_bddIte}\eidx{Cudd\_bddIte} returns a result that must be
explicitly referenced by the caller. \emph{Cudd\_bddIte} can be used
to implement all two-argument Boolean functions. However, the package
also provides \emph{Cudd\_bddAnd}\eidx{Cudd\_bddAnd} as well as the
other two-operand Boolean functions, which are slightly more efficient
when a two-operand function is called for. The following fragment of
code illustrates how to build the BDD for the function $f =
x_0'x_1'x_2'x_3'$.
\begin{verbatim}
DdManager *manager;
DdNode *f, *var, *tmp;
int i;

...

f = Cudd_ReadOne(manager);
Cudd_Ref(f);
for (i = 3; i >= 0; i--) {
    var = Cudd_bddIthVar(manager,i);
    tmp = Cudd_bddAnd(manager,Cudd_Not(var),f);
    Cudd_Ref(tmp);
    Cudd_RecursiveDeref(manager,f);
    f = tmp;
}
\end{verbatim}
This example illustrates the following points:
\begin{itemize}
\item Intermediate results must be ``referenced'' and
  ``dereferenced.'' However, \texttt{var} is a
  projection\index{projection functions} function, and its
  reference\index{node!reference count} count is always greater than
  0. Therefore, there is no call to \emph{Cudd\_Ref}\eidx{Cudd\_Ref}.
\item The new \texttt{f} must be assigned to a temporary variable
  (\texttt{tmp} in this example). If the result of
  \emph{Cudd\_bddAnd}\eidx{Cudd\_bddAnd} were assigned directly to
  \texttt{f}, the old \texttt{f} would be lost, and there would be no
  way to free its nodes.
\item The statement \texttt{f = tmp} has the same effect as:
\begin{verbatim}
f = tmp;
Cudd_Ref(f);
Cudd_RecursiveDeref(manager,tmp);
\end{verbatim}
  but is more efficient. The reference\index{node!reference count} is
  ``passed'' from \texttt{tmp} to \texttt{f}, and \texttt{tmp} is now
  ready to be reutilized.
\item It is normally more efficient to build BDDs ``bottom-up.'' This
  is why the loop goes from 3 to 0. Notice, however, that after
  variable reordering, higher index does not necessarily mean ``closer
  to the bottom.'' Of course, in this simple example, efficiency is
  not a concern.
\item Had we wanted to conjoin the variables in a bottom-up fashion
  even after reordering, we should have used
  \emph{Cudd\_ReadInvPerm}\eidx{Cudd\_ReadInvPerm}. One has to be
  careful, though, to fix the order of conjunction before entering the
  loop. Otherwise, if reordering takes place, it is possible to use
  one variable twice and skip another variable.
\end{itemize}
\subsection{Basic ADD Manipulation}
\index{ADD}\label{sec:basicADD}

The most common way to manipulate ADDs is via
\emph{Cudd\_addApply}\eidx{Cudd\_addApply}. This function can apply a
wide variety of operators to a pair of ADDs. Among the available
operators are addition, multiplication, division, minimum, maximum,
and Boolean operators that work on ADDs whose leaves are restricted to
0 and 1 (0-1 ADDs).

The following fragment of code illustrates how to build the ADD for
the function $f = 5x_0x_1x_2x_3$.
\begin{verbatim}
DdManager *manager;
DdNode *f, *var, *tmp;
int i;

...

f = Cudd_addConst(manager,5);
Cudd_Ref(f);
for (i = 3; i >= 0; i--) {
    var = Cudd_addIthVar(manager,i);
    Cudd_Ref(var);
    tmp = Cudd_addApply(manager,Cudd_addTimes,var,f);
    Cudd_Ref(tmp);
    Cudd_RecursiveDeref(manager,f);
    Cudd_RecursiveDeref(manager,var);
    f = tmp;
}
\end{verbatim}
This example, contrasted to the example of BDD manipulation,
illustrates the following points:
\begin{itemize}
\item The ADD projection\index{projection functions} functions are not
  maintained by the manager. It is therefore necessary to
  reference\index{node!reference} and
  dereference\index{node!dereference} them.
\item The product of two ADDs is computed by calling
  \emph{Cudd\_addApply}\eidx{Cudd\_addApply} with
  \emph{Cudd\_addTimes}\eidx{Cudd\_addTimes} as parameter. There is
  no ``apply'' function for BDDs, because
  \emph{Cudd\_bddAnd}\eidx{Cudd\_bddAnd} and
  \emph{Cudd\_bddXor}\eidx{Cudd\_bddXor} plus complementation are
  sufficient to implement all two-argument Boolean functions.
\end{itemize}
\subsection{Basic ZDD Manipulation}
\index{ZDD}\label{sec:basicZDD}

ZDDs are often generated by converting\index{conversion!of BDDs to ZDDs}
existing BDDs. (See Section~\ref{sec:convertZ}.) However, it is also
possible to build ZDDs by applying Boolean operators to other ZDDs,
starting from constants and projection\index{projection functions}
functions. The following fragment of code illustrates how to build
the ZDD for the function $f = x_0'+x_1'+x_2'+x_3'$. We assume that the
four variables already exist in the manager when the ZDD for $f$ is
built. Note the use of De Morgan's law.
\begin{verbatim}
    DdManager *manager;
    DdNode *f, *var, *tmp;
    int i;

    manager = Cudd_Init(0,4,CUDD_UNIQUE_SLOTS,
                        CUDD_CACHE_SLOTS,0);
    ...
    tmp = Cudd_ReadZddOne(manager,0);
    Cudd_Ref(tmp);
    for (i = 3; i >= 0; i--) {
        var = Cudd_zddIthVar(manager,i);
        Cudd_Ref(var);
        f = Cudd_zddIntersect(manager,var,tmp);
        Cudd_Ref(f);
        Cudd_RecursiveDerefZdd(manager,tmp);
        Cudd_RecursiveDerefZdd(manager,var);
        tmp = f;
    }
    f = Cudd_zddDiff(manager,Cudd_ReadZddOne(manager,0),tmp);
    Cudd_Ref(f);
    Cudd_RecursiveDerefZdd(manager,tmp);
\end{verbatim}
This example illustrates the following points:
\begin{itemize}
\item The projection\index{projection functions} functions are
  referenced, because they are not maintained by the manager.
\item Complementation is obtained by subtracting from the constant 1
  function.
\item The result of \emph{Cudd\_ReadZddOne}\eidx{Cudd\_ReadZddOne}
  does not require referencing.
\end{itemize}

CUDD provides functions for the manipulation of
covers\index{function!cover} represented by ZDDs. For instance,
\emph{Cudd\_zddIsop}\eidx{Cudd\_zddIsop} builds a ZDD representing an
irredundant\index{function!cover!irredundant} sum of products for the
incompletely specified function defined by the two BDDs $L$ and $U$.
\emph{Cudd\_zddWeakDiv}\eidx{Cudd\_zddWeakDiv} performs the weak
division of two covers given as ZDDs. These functions expect the two
ZDD variables corresponding to the two literals of the function
variable to be adjacent. One has to create variable groups (see
Section~\ref{sec:reordZ}) for reordering\index{reordering!of ZDDs} of
the ZDD variables to work. BDD automatic reordering is safe even
without groups: If realignment of ZDD and ADD/BDD variables is
requested (see Section~\ref{sec:consist}) groups will be kept
adjacent.
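A call to \emph{Cudd\_zddIsop} may be sketched as follows. We assume
that \verb|L| and \verb|U| are referenced BDDs for the lower and upper
bounds, and that two ZDD variables per BDD variable have already been
created (see Section~\ref{sec:convertZ}):
\begin{verbatim}
    DdNode *isopBdd, *isopZdd;

    isopBdd = Cudd_zddIsop(manager,L,U,&isopZdd);
    /* isopBdd is the BDD of the function of the cover;
    ** isopZdd is the cover itself as a ZDD.  Both must
    ** be referenced if the call succeeded. */
    Cudd_Ref(isopBdd);
    Cudd_Ref(isopZdd);
\end{verbatim}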
\subsection{Converting ADDs to BDDs and Vice Versa}
\index{conversion!of ADDs to BDDs}
\index{conversion!of BDDs to ADDs}\label{sec:convert}

Several procedures are provided to convert ADDs to BDDs, according to
different criteria.
(\emph{Cudd\_addBddPattern}\eidx{Cudd\_addBddPattern},
\emph{Cudd\_addBddInterval}\eidx{Cudd\_addBddInterval}, and
\emph{Cudd\_addBddThreshold}\eidx{Cudd\_addBddThreshold}.) The
conversion from BDDs to ADDs
(\emph{Cudd\_BddToAdd}\eidx{Cudd\_BddToAdd}) is based on the simple
principle of mapping the logical 0\index{zero!logical} and 1 on the
arithmetic\index{zero!arithmetic} 0 and 1. It is also possible to
convert an ADD with integer values (more precisely, floating point
numbers with 0 fractional part) to an array of BDDs by repeatedly
calling \emph{Cudd\_addIthBit}\eidx{Cudd\_addIthBit}.
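For instance, assuming \verb|add| is a referenced ADD, the following
sketch extracts the BDD of the inputs for which the ADD value falls in
the interval $[1,3]$:
\begin{verbatim}
    DdNode *bdd;

    bdd = Cudd_addBddInterval(manager,add,1,3);
    if (bdd != NULL) Cudd_Ref(bdd);
\end{verbatim}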
\subsection{Converting BDDs to ZDDs and Vice Versa}
\index{conversion!of ZDDs to BDDs}
\index{conversion!of BDDs to ZDDs}\label{sec:convertZ}

Many applications first build a set of BDDs and then derive ZDDs from
the BDDs. These applications should create the manager with 0
ZDD\index{ZDD} variables and create the BDDs. Then they should call
\emph{Cudd\_zddVarsFromBddVars}\eidx{Cudd\_zddVarsFromBddVars} to
create the necessary ZDD variables---whose number is likely to be
known once the BDDs are available. This approach eliminates the
difficulties that arise when the number of ZDD variables changes while
ZDDs are being built.

The simplest conversion from BDDs to ZDDs is a simple change of
representation, which preserves the functions. Simply put, given a BDD
for $f$, a ZDD for $f$ is requested. In this case the correspondence
between the BDD variables and ZDD variables is one-to-one. Hence,
\emph{Cudd\_zddVarsFromBddVars} should be called with the
\emph{multiplicity} parameter equal to 1. The conversion proper can
then be performed by calling
\emph{Cudd\_zddPortFromBdd}\eidx{Cudd\_zddPortFromBdd}. The inverse
transformation is performed by
\emph{Cudd\_zddPortToBdd}\eidx{Cudd\_zddPortToBdd}.
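A minimal sketch of the round trip, assuming \verb|f| is a referenced
BDD in a manager created with 0 ZDD variables:
\begin{verbatim}
    DdNode *zddF, *g;

    (void) Cudd_zddVarsFromBddVars(manager,1);
    zddF = Cudd_zddPortFromBdd(manager,f);
    Cudd_Ref(zddF);
    g = Cudd_zddPortToBdd(manager,zddF);
    Cudd_Ref(g);   /* g represents the same function as f */
\end{verbatim}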
ZDDs are quite often used for the representation of
\emph{covers}\index{function!cover}. This is normally done by
associating two ZDD variables to each variable of the function. (And
hence, typically, to each BDD variable.) One ZDD variable is
associated with the positive literal of the BDD variable, while the
other ZDD variable is associated with the negative literal. A call to
\emph{Cudd\_zddVarsFromBddVars}\eidx{Cudd\_zddVarsFromBddVars} with
\emph{multiplicity} equal to 2 will associate to BDD variable $i$ the
two ZDD variables $2i$ and $2i+1$.

If a BDD variable group tree exists when
\emph{Cudd\_zddVarsFromBddVars} is called (see
Section~\ref{sec:group}) the function generates a ZDD variable group
tree consistent with it. In any case, all the ZDD variables derived
from the same BDD variable are clustered into a group.

If the ZDD for $f$ is created and later a new ZDD variable is added to
the manager, the function represented by the existing ZDD changes.
Suppose, for instance, that two variables are initially created, and
that the ZDD for $f = x_0 + x_1$ is built. If a third variable is
added, say $x_2$, then the ZDD represents $g = (x_0 + x_1) x_2'$
instead. This change in function obviously applies regardless of what
use is made of the ZDD\@. However, if the ZDD is used to represent a
cover\index{function!cover}, the cover itself is not changed by the
addition of the new variable. (What changes is the
characteristic\index{function!characteristic} function of the cover.)
\subsection{Variable Reordering for BDDs and ADDs}
\index{reordering!of BDDs and ADDs}\label{sec:reorder}

The CUDD package provides a rich set of
dynamic\index{reordering!dynamic} reordering algorithms. Some of them
are slight variations of existing techniques
\cite{Rudell93,Drechs95,Bollig95,Ishiur91,Plessi93,Jeong93}; some
others have been developed specifically for this package
\cite{Panda94,Panda95b}.

Reordering affects a unique\index{table!unique} table. This means that
BDDs and ADDs, which share the same unique table, are simultaneously
reordered. ZDDs, on the other hand, are reordered separately. In the
following we discuss the reordering of BDDs and ADDs. Reordering for
ZDDs is the subject of Section~\ref{sec:reordZ}.

Reordering of the variables can be invoked directly by the application
by calling \emph{Cudd\_ReduceHeap}\eidx{Cudd\_ReduceHeap}. Or it can
be automatically triggered by the package when the number of nodes has
reached a given threshold\index{reordering!threshold}. (The threshold
is initialized and automatically adjusted after each reordering by the
package.) To enable automatic dynamic reordering (also called
\emph{asynchronous\index{reordering!asynchronous}} dynamic reordering
in this document) the application must call
\emph{Cudd\_AutodynEnable}\eidx{Cudd\_AutodynEnable}. Automatic
dynamic reordering can subsequently be disabled by calling
\emph{Cudd\_AutodynDisable}\eidx{Cudd\_AutodynDisable}.
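A typical setup might look like the following sketch:
\begin{verbatim}
    /* Reorder automatically by sifting whenever the node
    ** count crosses the package-maintained threshold. */
    Cudd_AutodynEnable(manager,CUDD_REORDER_SIFT);
    ...
    /* Explicit invocation; the last parameter is the
    ** minimum number of nodes for reordering to occur. */
    (void) Cudd_ReduceHeap(manager,CUDD_REORDER_SIFT,0);
    ...
    Cudd_AutodynDisable(manager);
\end{verbatim}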
All reordering methods are available in both the case of direct call
to \emph{Cudd\_ReduceHeap}\eidx{Cudd\_ReduceHeap} and the case of
automatic invocation. For many methods, the reordering procedure is
iterated until no further improvement is obtained. We call these
methods the \emph{converging\index{reordering!converging}} methods.
When constraints are imposed on the relative position of variables
(see Section~\ref{sec:group}) the reordering methods apply inside the
groups. The groups\index{reordering!group} themselves are reordered
by sifting\index{reordering!sifting}. Each method is identified by a
constant of the enumerated type
\emph{Cudd\_ReorderingType\index{reordering!Cudd\_ReorderingType}}
defined in \emph{cudd.h\index{cudd.h}} (the external
header\index{header files} file of the CUDD package):
\begin{description}
\item[CUDD\_REORDER\_NONE\index{CUDD\_REORDER\_NONE}:] This method
  causes no reordering.
\item[CUDD\_REORDER\_SAME\index{CUDD\_REORDER\_SAME}:] If passed to
  \emph{Cudd\_AutodynEnable}\eidx{Cudd\_AutodynEnable}, this
  method leaves the current method for automatic reordering unchanged.
  If passed to \emph{Cudd\_ReduceHeap}\eidx{Cudd\_ReduceHeap},
  this method causes the current method for automatic reordering to be
  used.
\item[CUDD\_REORDER\_RANDOM\index{CUDD\_REORDER\_RANDOM}:] Pairs of
  variables are randomly chosen, and swapped in the order. The swap is
  performed by a series of swaps of adjacent variables. The best order
  among those obtained by the series of swaps is retained. The number
  of pairs chosen for swapping\index{reordering!random} equals the
  number of variables in the diagram.
\item[CUDD\_REORDER\_RANDOM\_PIVOT\index{CUDD\_REORDER\_RANDOM\_PIVOT}:]
  Same as CUDD\_REORDER\_RANDOM, but the two variables are chosen so
  that the first is above the variable with the largest number of
  nodes, and the second is below that variable. In case there are
  several variables tied for the maximum number of nodes, the one
  closest to the root is used.
\item[CUDD\_REORDER\_SIFT\index{CUDD\_REORDER\_SIFT}:] This method is
  an implementation of Rudell's sifting\index{reordering!sifting}
  algorithm \cite{Rudell93}. A simplified description of sifting is as
  follows: Each variable is considered in turn. A variable is moved up
  and down in the order so that it takes all possible positions. The
  best position is identified and the variable is returned to that
  position.

  In reality, things are a bit more complicated. For instance, there
  is a limit on the number of variables that will be sifted. This
  limit can be read with
  \emph{Cudd\_ReadSiftMaxVar}\eidx{Cudd\_ReadSiftMaxVar} and set with
  \emph{Cudd\_SetSiftMaxVar}\eidx{Cudd\_SetSiftMaxVar}. In addition,
  if the diagram grows too much while moving a variable up or down,
  that movement is terminated before the variable has reached one end
  of the order. The maximum ratio by which the diagram is allowed to
  grow while a variable is being sifted can be read with
  \emph{Cudd\_ReadMaxGrowth}\eidx{Cudd\_ReadMaxGrowth} and set with
  \emph{Cudd\_SetMaxGrowth}\eidx{Cudd\_SetMaxGrowth}.
\item[CUDD\_REORDER\_SIFT\_CONVERGE\index{CUDD\_REORDER\_SIFT\_CONVERGE}:]
  This is the converging\index{reordering!converging} variant of
  CUDD\-\_REORDER\_SIFT.
\item[CUDD\_REORDER\_SYMM\_SIFT\index{CUDD\_REORDER\_SYMM\_SIFT}:]
  This method is an implementation of
  symmetric\index{reordering!symmetric} sifting \cite{Panda94}. It is
  similar to sifting, with one addition: Variables that become
  adjacent during sifting are tested for symmetry\index{symmetry}. If
  they are symmetric, they are linked in a group. Sifting then
  continues with a group being moved, instead of a single variable.
  After symmetric sifting has been run,
  \emph{Cudd\_SymmProfile}\eidx{Cudd\_SymmProfile} can be called to
  report on the symmetry groups found. (Both positive and negative
  symmetries are reported.)
\item[CUDD\_REORDER\_SYMM\_SIFT\_CONV\index{CUDD\_REORDER\_SYMM\_SIFT\_CONV}:]
  This is the converging\index{reordering!converging} variant of
  CUDD\-\_REORDER\_SYMM\_SIFT.
\item[CUDD\_REORDER\_GROUP\_SIFT\index{CUDD\_REORDER\_GROUP\_SIFT}:]
  This method is an implementation of group\index{reordering!group}
  sifting \cite{Panda95b}. It is similar to symmetric sifting, but
  aggregation\index{aggregation} is not restricted to symmetric
  variables.
\item[CUDD\_REORDER\_GROUP\_SIFT\_CONV\index{CUDD\_REORDER\_GROUP\_SIFT\_CONV}:]
  This method repeats until convergence the combination of
  CUDD\_REORDER\_GROUP\_SIFT and CUDD\-\_REORDER\_WINDOW4.
\item[CUDD\_REORDER\_WINDOW2\index{CUDD\_REORDER\_WINDOW2}:] This
  method implements the window\index{reordering!window} permutation
  approach of Fujita \cite{Fujita91b} and Ishiura \cite{Ishiur91}.
  The size of the window is 2.
\item[CUDD\_REORDER\_WINDOW3\index{CUDD\_REORDER\_WINDOW3}:] Similar
  to CUDD\_REORDER\_WINDOW2, but with a window of size 3.
\item[CUDD\_REORDER\_WINDOW4\index{CUDD\_REORDER\_WINDOW4}:] Similar
  to CUDD\_REORDER\_WINDOW2, but with a window of size 4.
\item[CUDD\_REORDER\_WINDOW2\_CONV\index{CUDD\_REORDER\_WINDOW2\_CONV}:]
  This is the converging\index{reordering!converging} variant of
  CUDD\-\_REORDER\_WINDOW2.
\item[CUDD\_REORDER\_WINDOW3\_CONV\index{CUDD\_REORDER\_WINDOW3\_CONV}:]
  This is the converging variant of CUDD\-\_REORDER\_WINDOW3.
\item[CUDD\_REORDER\_WINDOW4\_CONV\index{CUDD\_REORDER\_WINDOW4\_CONV}:]
  This is the converging variant of CUDD\-\_REORDER\_WINDOW4.
\item[CUDD\_REORDER\_ANNEALING\index{CUDD\_REORDER\_ANNEALING}:] This
  method is an implementation of simulated
  annealing\index{reordering!simulated annealing} for variable
  ordering, vaguely reminiscent of the algorithm of \cite{Bollig95}.
  This method is potentially very slow.
\item[CUDD\_REORDER\_GENETIC:\index{CUDD\_REORDER\_GENETIC}] This
  method is an implementation of a genetic\index{reordering!genetic}
  algorithm for variable ordering, inspired by the work of Drechsler
  \cite{Drechs95}. This method is potentially very slow.
\item[CUDD\_REORDER\_EXACT\index{CUDD\_REORDER\_EXACT}:] This method
  implements a dynamic programming approach to
  exact\index{reordering!exact} reordering
  \cite{Held62,Friedman90,Ishiur91}, with improvements described in
  \cite{Jeong93}. It only stores one BDD at a time. Therefore, it is
  relatively efficient in terms of memory. Compared to other
  reordering strategies, it is very slow, and is not recommended for
  more than 16 variables.
\end{description}
So far we have described methods whereby the package selects an order
automatically. A given order of the variables can also be imposed by
calling \emph{Cudd\_ShuffleHeap}\eidx{Cudd\_ShuffleHeap}.
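For instance, in a hypothetical three-variable manager, the following
sketch puts variable 2 at the top of the order, followed by variables
0 and 1:
\begin{verbatim}
    int permutation[3] = {2, 0, 1};

    if (Cudd_ShuffleHeap(manager,permutation) == 0) {
        /* reordering failed */
    }
\end{verbatim}
The $i$-th entry of the permutation array holds the index of the
variable that should occupy the $i$-th position in the order.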
\subsection{Grouping Variables}
\index{variable!groups}\label{sec:group}

CUDD allows the application to specify constraints on the positions of
groups of variables. It is possible to request that a group of
contiguous variables be kept contiguous by the reordering procedures.
It is also possible to request that the relative order of some groups
of variables be left unchanged. The constraints on the order are
specified by means of a tree\index{variable!tree}, which is created in
one of two ways:
\begin{itemize}
\item By calling \emph{Cudd\_MakeTreeNode}\eidx{Cudd\_MakeTreeNode}.
\item By calling the functions of the MTR\index{libraries!mtr} library
  (part of the distribution), and by registering the result with the
  manager using \emph{Cudd\_SetTree}\eidx{Cudd\_SetTree}. The current
  tree registered with the manager can be read with
  \emph{Cudd\_ReadTree}\eidx{Cudd\_ReadTree}.
\end{itemize}
Each node in the tree represents a range of variables. The lower bound
of the range is given by the \emph{low} field of the node, and the
size of the group is given by the \emph{size} field of the
node.\footnote{When the variables in a group are reordered, the
  association between the \emph{low} field and the index of the first
  variable in the group is lost. The package updates the tree to keep
  track of the changes. However, the application cannot rely on
  \emph{low} to determine the position of variables.} The variables
in each range are kept contiguous. Furthermore, if a node is marked
with the MTR\_FIXED\index{MTR\_FIXED} flag, then the relative order of
the variable ranges associated to its children is not changed. As an
example, suppose the initial variable order is:
\begin{verbatim}
    x0, y0, z0, x1, y1, z1, ... , x9, y9, z9.
\end{verbatim}
Suppose we want to keep each group of three variables with the same
index (e.g., \verb|x3, y3, z3|) contiguous, while allowing the package
to change the order of the groups. We can accomplish this with the
following code:
\begin{verbatim}
    for (i = 0; i < 10; i++) {
        (void) Cudd_MakeTreeNode(manager,i*3,3,MTR_DEFAULT);
    }
\end{verbatim}
If we want to keep the order within each group of variables
fixed (i.e., \verb|x| before \verb|y| before \verb|z|) we need to
change MTR\_DEFAULT\index{MTR\_DEFAULT} into MTR\_FIXED.
The \emph{low} parameter passed to
\emph{Cudd\_MakeTreeNode}\eidx{Cudd\_MakeTreeNode} is the index of a
variable (as opposed to its level or position in the order). The
group tree\index{variable!tree} can be created at any time. The
result obviously depends on the variable order in effect at creation
time.

It is possible to create a variable group tree also before the
variables themselves are created. The package assumes in this case
that the index of the variables not yet in existence will equal their
position in the order when they are created. Therefore, applications
that rely on
\emph{Cudd\_bddNewVarAtLevel}\eidx{Cudd\_bddNewVarAtLevel} or
\emph{Cudd\_addNewVarAtLevel}\eidx{Cudd\_addNewVarAtLevel} to create
new variables have to create the variables before they group them.

The reordering procedure will skip all groups whose variables are not
yet in existence. For groups that are only partially in existence, the
reordering procedure will try to reorder the variables already
instantiated, without violating the adjacency constraints.
\subsection{Variable Reordering for ZDDs}
\index{reordering!of ZDDs}\label{sec:reordZ}

Reordering of ZDDs is done in much the same way as the reordering of
BDDs and ADDs. The functions corresponding to \emph{Cudd\_ReduceHeap}
and \emph{Cudd\_ShuffleHeap} are
\emph{Cudd\_zddReduceHeap}\eidx{Cudd\_zddReduceHeap} and
\emph{Cudd\_zddShuffleHeap}\eidx{Cudd\_zddShuffleHeap}. To enable
dynamic\index{reordering!dynamic} reordering, the application must
call \emph{Cudd\_AutodynEnableZdd}\eidx{Cudd\_AutodynEnableZdd}, and
to disable dynamic reordering, it must call
\emph{Cudd\_AutodynDisableZdd}\eidx{Cudd\_AutodynDisableZdd}. In the
current implementation, however, the choice of reordering methods for
ZDDs is more limited. Specifically, these methods are available:
\begin{description}
\item[CUDD\_REORDER\_NONE\index{CUDD\_REORDER\_NONE};]
\item[CUDD\_REORDER\_SAME\index{CUDD\_REORDER\_SAME};]
\item[CUDD\_REORDER\_RANDOM\index{CUDD\_REORDER\_RANDOM};]
\item[CUDD\_REORDER\_RANDOM\_PIVOT\index{CUDD\_REORDER\_RANDOM\_PIVOT};]
\item[CUDD\_REORDER\_SIFT\index{CUDD\_REORDER\_SIFT};]
\item[CUDD\_REORDER\_SIFT\_CONVERGE\index{CUDD\_REORDER\_SIFT\_CONVERGE};]
\item[CUDD\_REORDER\_SYMM\_SIFT\index{CUDD\_REORDER\_SYMM\_SIFT};]
\item[CUDD\_REORDER\_SYMM\_SIFT\_CONV\index{CUDD\_REORDER\_SYMM\_SIFT\_CONV}.]
\end{description}
To create ZDD variable groups, the application calls
\emph{Cudd\_MakeZddTreeNode}\eidx{Cudd\_MakeZddTreeNode}.
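For instance, if each BDD variable has been given two ZDD variables
(\emph{multiplicity} 2, as in Section~\ref{sec:convertZ}), the pairs
can be grouped by hand with the following sketch, where
\verb|nBddVars| stands for the number of BDD variables:
\begin{verbatim}
    for (i = 0; i < nBddVars; i++) {
        (void) Cudd_MakeZddTreeNode(manager,2*i,2,MTR_FIXED);
    }
\end{verbatim}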
\subsection{Keeping Consistent Variable Orders for BDDs and ZDDs}
\label{sec:consist}

Several applications that manipulate both BDDs and ZDDs benefit from
keeping a fixed correspondence between the order of the BDD variables
and the order of the ZDD variables. If each BDD variable corresponds
to a group of ZDD variables, then it is often desirable that the
groups of ZDD variables be in the same order as the corresponding BDD
variables. CUDD allows the ZDD order to track the BDD order and vice
versa. To have the ZDD order track the BDD order, the application
calls \emph{Cudd\_zddRealignEnable}\eidx{Cudd\_zddRealignEnable}. The
effect of this call can be reversed by calling
\emph{Cudd\_zddRealignDisable}\eidx{Cudd\_zddRealignDisable}. When
ZDD realignment is in effect, automatic reordering of ZDDs should be
disabled.
\subsection{Hooks}
\index{hook}\label{sec:hooks}

Hooks in CUDD are lists of application-specified functions to be run on
certain occasions. Each hook is identified by a constant of the
enumerated type \emph{Cudd\_HookType}\eidx{Cudd\_HookType}. In Version
\vnumber\ hooks are defined for these occasions:
\begin{itemize}
\item before garbage collection (CUDD\_PRE\_GC\_HOOK);
\item after garbage collection (CUDD\_POST\_GC\_HOOK);
\item before variable reordering (CUDD\_PRE\_REORDERING\_HOOK);
\item after variable reordering (CUDD\_POST\_REORDERING\_HOOK).
\end{itemize}
A function added to a hook receives a pointer to the manager, a
pointer to a constant string, and a pointer to void as arguments; it
must return 1 if successful; 0 otherwise. The second argument is one
of ``DD,'' ``BDD,'' and ``ZDD.'' This allows the hook functions to
tell the type of diagram for which reordering or garbage collection
takes place. The third argument varies depending on the hook. The hook
functions called before or after garbage collection\index{garbage
  collection!hooks} do not use it. The hook functions called before
reordering\index{reordering!hooks} are passed, in addition to the
pointer to the manager, also the method used for reordering. The hook
functions called after reordering are passed the start time. To add a
function to a hook, one uses \emph{Cudd\_AddHook}\eidx{Cudd\_AddHook}.
The functions of a given hook are called in the order in which they
were added to the hook. For sample hook functions, one may look at
\emph{Cudd\_StdPreReordHook}\eidx{Cudd\_StdPreReordHook} and
\emph{Cudd\_StdPostReordHook}\eidx{Cudd\_StdPostReordHook}.
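A hook function and its registration might look like the following
sketch (the function name is invented for illustration):
\begin{verbatim}
    static int
    myPreGcHook(DdManager *dd, const char *str, void *data)
    {
        (void) printf("garbage collecting (%s)...\n", str);
        return(1);   /* report success */
    }
    ...
    if (Cudd_AddHook(manager,myPreGcHook,CUDD_PRE_GC_HOOK) == 0) {
        /* registration failed */
    }
\end{verbatim}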
\subsection{Timeouts and Limits}
\index{timeout}\label{sec:timeouts}

It is possible to set a time limit for a manager with
\emph{Cudd\_SetTimeLimit}\eidx{Cudd\_SetTimeLimit}. Once set, the
time available to the manager can be inspected and modified through
other API functions.
(\emph{Cudd\_TimeLimited}\eidx{Cudd\_TimeLimited},
\emph{Cudd\_ReadTimeLimit}\eidx{Cudd\_ReadTimeLimit},
\emph{Cudd\_UnsetTimeLimit}\eidx{Cudd\_UnsetTimeLimit},
\emph{Cudd\_UpdateTimeLimit}\eidx{Cudd\_UpdateTimeLimit},
\emph{Cudd\_IncreaseTimeLimit}\eidx{Cudd\_IncreaseTimeLimit}.) CUDD
checks for expiration from time to time. When the deadline has
expired, it returns NULL from the call in progress, but it leaves the
manager in a consistent state. The invoking application must be
designed to handle the NULL values returned.

When reordering, if a timeout is approaching, CUDD will quit reordering
to give the application a chance to finish some computation.

It is also possible to invoke some functions that return NULL if they
cannot complete without creating more than a set number of nodes.
See, for instance, \emph{Cudd\_bddAndLimit}\eidx{Cudd\_bddAndLimit}.
Only functions that are documented to check for the number of
generated nodes do so. (Their names end in ``Limit.'') These
functions set the error code to
\emph{CUDD\_TOO\_MANY\_NODES}\eidx{CUDD\_TOO\_MANY\_NODES} when they
return NULL because of too many nodes. The error code can be
inspected with \emph{Cudd\_ReadErrorCode}\eidx{Cudd\_ReadErrorCode}
and cleared with
\emph{Cudd\_ClearErrorCode}\eidx{Cudd\_ClearErrorCode}.
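A call to a limited function might be handled as in this sketch, where
\verb|f1| and \verb|f2| are referenced BDDs and the limit of 50000
nodes is an arbitrary choice:
\begin{verbatim}
    DdNode *g;

    g = Cudd_bddAndLimit(manager,f1,f2,50000);
    if (g == NULL) {
        if (Cudd_ReadErrorCode(manager) == CUDD_TOO_MANY_NODES) {
            Cudd_ClearErrorCode(manager);
            /* fall back to another strategy */
        }
    } else {
        Cudd_Ref(g);
    }
\end{verbatim}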
\subsection{Writing Decision Diagrams to a File}
\label{sec:dump}

The CUDD package provides several functions to write decision diagrams
to a file. \emph{Cudd\_DumpBlif}\eidx{Cudd\_DumpBlif} writes a
file in \emph{blif} format. It is restricted to BDDs. The diagrams
are written as a network of multiplexers, one multiplexer for each
internal node of the BDD.

\emph{Cudd\_DumpDot}\eidx{Cudd\_DumpDot} produces input suitable to
the graph-drawing\index{graph!drawing} program
\href{http://www.graphviz.org}{\emph{dot}} written by Eleftherios
Koutsofios and Stephen C. North. An example of drawing produced by
dot from the output of \emph{Cudd\_DumpDot} is shown in
Figure~\ref{fi:phase}. \emph{Cudd\_DumpDot} is restricted to BDDs and
ADDs; \emph{Cudd\_zddDumpDot} may be used to draw ZDDs.
\begin{figure}
\centerline{\includegraphics[height=15.5cm]{@top_srcdir@/doc/phase.pdf}}
\caption{A BDD representing a phase constraint for the optimization of
  fixed-polarity Reed-Muller forms. The label of each node is the
  unique part of the node address. All nodes on the same level
  correspond to the same variable, whose name is shown at the left of
  the diagram. Dotted lines indicate complement\index{arc!complement}
  arcs. Dashed lines indicate regular\index{arc!regular} ``else''
  arcs.\label{fi:phase}}
\end{figure}

\emph{Cudd\_zddDumpDot}\eidx{Cudd\_zddDumpDot} is the analog of
\emph{Cudd\_DumpDot} for ZDDs.
\emph{Cudd\_DumpDaVinci}\eidx{Cudd\_DumpDaVinci} produces input
suitable to the graph-drawing\index{graph!drawing} program
\href{ftp://ftp.uni-bremen.de/pub/graphics/daVinci}{\emph{daVinci}}
developed at the University of Bremen. It is restricted to BDDs and
ADDs.

Functions are also available to produce the input format of
\emph{DDcal} (see Section~\ref{sec:getFriends}) and factored forms.
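For example, a referenced BDD \verb|f| may be written in dot format as
in the following sketch; passing NULL for the name arrays lets CUDD
generate default names:
\begin{verbatim}
    DdNode *outputs[1];
    FILE *fp = fopen("f.dot","w");

    outputs[0] = f;
    (void) Cudd_DumpDot(manager,1,outputs,NULL,NULL,fp);
    (void) fclose(fp);
\end{verbatim}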
\subsection{Saving and Restoring BDDs}
\label{sec:save-restore}

The \href{ftp://ftp.polito.it/pub/research/dddmp/}{\emph{dddmp}}
library\index{libraries!dddmp} by Gianpiero Cabodi and Stefano Quer
allows a CUDD application to save BDDs to disk in compact form for
later retrieval. See the library's own documentation for the details.
%----------------------------------------
\section{Programmer's Manual}
\label{sec:prog}

This section provides additional detail on the workings of the CUDD
package and on the programming conventions followed in its writing.
The additional detail should help those who want to write procedures
that directly manipulate the CUDD data structures.

\subsection{Compiling and Linking}
\index{compiling}\label{sec:compileInt}

If you plan to use the CUDD package as a clear box\index{box!clear}
(for instance, you want to write a procedure that traverses a decision
diagram) you need to add
\begin{verbatim}
    #include "cuddInt.h"
\end{verbatim}
to your source files. In addition, you should link \verb|libcudd.a| to
your executable. Some platforms require specific compiler and linker
flags. Refer to the \texttt{Makefile} in the top level directory of
the distribution.
\subsection{Reference Counts}
\index{node!reference count}\label{sec:ref}

Garbage\index{garbage collection} collection in the CUDD package is
based on reference counts. Each node stores the sum of the external
references and internal references. An internal BDD or ADD node is
created by a call to \emph{cuddUniqueInter}\eidx{cuddUniqueInter}, an
internal ZDD node is created by a call to
\emph{cuddUniqueInterZdd}\eidx{cuddUniqueInterZdd}, and a
terminal\index{node!constant} node is created by a call to
\emph{cuddUniqueConst}\eidx{cuddUniqueConst}. If the node returned by
these functions is new, its reference count is zero. The function
that calls \emph{cuddUniqueInter}\eidx{cuddUniqueInter},
\emph{cuddUniqueInterZdd}\eidx{cuddUniqueInterZdd}, or
\emph{cuddUniqueConst}\eidx{cuddUniqueConst} is responsible for
increasing the reference count of the node. This is accomplished by
calling \emph{Cudd\_Ref}\eidx{Cudd\_Ref}.

When a function is no longer needed by an application, the memory used
by its diagram can be recycled by calling
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref} (BDDs and ADDs)
or \emph{Cudd\_RecursiveDerefZdd}\eidx{Cudd\_RecursiveDerefZdd}
(ZDDs). These functions decrease the reference \index{node!reference
  count} count of the node passed to them. If the reference count
becomes 0, then two things happen:
\begin{enumerate}
\item The node is declared ``dead\index{node!dead};'' this entails
  increasing the counters\index{statistical counters} of the dead
  nodes. (One counter for the subtable\index{subtable} to which the
  node belongs, and one global counter for the
  unique\index{table!unique} table to which the node belongs.) The
  node itself is not affected.
\item The function is recursively called on the two children of the
  node.
\end{enumerate}
For instance, if the diagram of a function does not share any nodes
with other diagrams, then calling
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref} or
\emph{Cudd\_RecursiveDerefZdd}\eidx{Cudd\_RecursiveDerefZdd} on its
root will cause all the nodes of the diagram to become dead.

When the number of dead nodes reaches a given level (dynamically
determined by the package) garbage collection takes place. During
garbage\index{garbage collection} collection dead nodes are returned
to the node free list\index{free list}.

When a new node is created, it is important to increase its
reference\index{node!reference count} count before one of the two
following events occurs:
\begin{enumerate}
\item A call to \emph{cuddUniqueInter}\eidx{cuddUniqueInter},
  to \emph{cuddUniqueInterZdd}\eidx{cuddUniqueInterZdd}, to
  \emph{cuddUniqueConst}\eidx{cuddUniqueConst}, or to a
  function that may eventually cause a call to them.
\item A call to
  \emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref}, to
  \emph{Cudd\_RecursiveDerefZdd}\eidx{Cudd\_RecursiveDerefZdd}, or to
  a function that may eventually cause a call to them.
\end{enumerate}
In practice, it is recommended to increase the reference count as soon
as the returned pointer has been tested for not being NULL.
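The recommended pattern is therefore the following sketch:
\begin{verbatim}
    DdNode *f;

    f = Cudd_bddAnd(manager,g,h);
    if (f == NULL) {
        /* handle error */
    }
    Cudd_Ref(f);   /* protect f before any other call */
    ...
    Cudd_RecursiveDeref(manager,f);  /* when no longer needed */
\end{verbatim}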
\subsubsection{NULL Return Values}
\label{sec:null}

The interface to the memory management functions (e.g., malloc) used
by CUDD intercepts NULL return values and calls a handler. The
default handler exits with an error message. If the application does
not install another handler, therefore, a NULL return value from an
exported function of CUDD signals an internal error.

If the application, however, installs another handler that lets
execution continue, a NULL pointer returned by an exported function
typically indicates that the process has run out of memory.
\emph{Cudd\_ReadErrorCode}\eidx{Cudd\_ReadErrorCode} can be used to
ascertain the nature of the problem.

An application that tests for the result being NULL can try some
remedial action, if it runs out of memory. For instance, it may free
some memory that is not strictly necessary, or try a slower algorithm
that takes less space. As an example, CUDD overrides the default
handler when trying to enlarge the cache or increase the number of
slots of the unique table. If the allocation fails, the package prints
out a message and continues without resizing the cache.
\subsubsection{\emph{Cudd\_RecursiveDeref} vs.\ \emph{Cudd\_Deref}}
\label{sec:deref}

It is often the case that a recursive procedure has to protect the
result it is going to return, while it disposes of intermediate
results. (See the previous discussion on when to increase reference
counts.) Once the intermediate results have been properly disposed
of, the final result must be returned to its pristine state, in which
the root node may have a reference count of 0. One cannot use
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref} (or
\emph{Cudd\_RecursiveDerefZdd}) for this purpose, because it may
erroneously make some nodes dead. Therefore, the package provides a
different function: \emph{Cudd\_Deref}\eidx{Cudd\_Deref}. This
function is not recursive, and does not change the dead node counts.
Its use is almost exclusively the one just described: Decreasing the
reference count of the root of the final result before returning from
a recursive procedure.
\subsubsection{When Increasing the Reference Count is Unnecessary}
\index{node!reference count}\label{sec:noref}

When a copy of a predefined constant\index{node!constant} or of a
simple BDD variable is needed for comparison purposes, then calling
\emph{Cudd\_Ref}\eidx{Cudd\_Ref} is not necessary, because these
simple functions are guaranteed to have reference counts greater than
0 at all times. If no call to \emph{Cudd\_Ref} is made, then no
attempt to free the diagram by calling
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref} or
\emph{Cudd\_RecursiveDerefZdd}\eidx{Cudd\_RecursiveDerefZdd} should be
made.
\subsubsection{Saturating Increments and Decrements}
\index{saturating!increments}\index{saturating!decrements}\label{sec:satur}

On 32-bit machines, the CUDD package stores the
reference\index{node!reference count} counts in unsigned short int's.
For large diagrams, it is possible for some reference counts to exceed
the capacity of an unsigned short int. Therefore, increments and
decrements of reference counts are \emph{saturating}. This means that
once a reference count has reached the maximum possible value, it is
no longer changed by calls to \emph{Cudd\_Ref},
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref},
\emph{Cudd\_RecursiveDerefZdd}\eidx{Cudd\_RecursiveDerefZdd}, or
\emph{Cudd\_Deref}\eidx{Cudd\_Deref}. As a consequence, some nodes
that have no references may not be declared dead. This may result in a
small waste of memory, which is normally more than offset by the
reduction in size of the node structure.

When using 64-bit pointers, there is normally no memory advantage from
using short int's instead of int's in a DdNode. Therefore, increments
and decrements are not saturating in that case. Which option is in
effect depends on two macros, SIZEOF\_VOID\_P\index{SIZEOF\_VOID\_P}
and SIZEOF\_INT\index{SIZEOF\_INT}, defined in the configuration
header\index{header files} file (\emph{config.h}\index{config.h}).
The increments and decrements of the reference counts are performed
using two macros: \emph{cuddSatInc}\eidx{cuddSatInc} and
\emph{cuddSatDec}\eidx{cuddSatDec}, whose definitions depend on
SIZEOF\_VOID\_P\index{SIZEOF\_VOID\_P} and
SIZEOF\_INT\index{SIZEOF\_INT}.
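The saturating behavior described above can be sketched with a couple of
self-contained macros. This is a toy model of the 32-bit case (counts in
unsigned short), not CUDD's actual \emph{cuddSatInc}/\emph{cuddSatDec}
definitions:
\begin{verbatim}
#include <limits.h>
#include <stdio.h>

/* Illustrative saturating update macros: once a count reaches
 * USHRT_MAX it is frozen there, for increments and decrements alike. */
#define satInc(x) ((x) += ((x) != USHRT_MAX))
#define satDec(x) ((x) -= ((x) != USHRT_MAX))

int main(void)
{
    unsigned short ref = USHRT_MAX - 1;
    satInc(ref);              /* reaches the maximum ...           */
    satInc(ref);              /* ... and sticks there              */
    satDec(ref);              /* a saturated count never decreases */
    printf("%d\n", ref == USHRT_MAX);   /* prints 1 */
    return 0;
}
\end{verbatim}
A node whose count is frozen in this way can never become dead, which is
exactly the small memory waste mentioned above.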
\subsection{Complement Arcs}
\index{arc!complement}\label{sec:compl}

If ADDs are restricted to use only the constants 0 and 1, they behave
like BDDs without complement arcs. It is normally easier to write code
that manipulates 0-1 ADDs, than to write code for BDDs. However,
complementation is trivial with complement arcs, and is not trivial
without. As a consequence, with complement arcs it is possible to
check for more terminal cases and it is possible to apply De Morgan's
laws to reduce problems that are essentially identical to a standard
form. This in turn increases the utilization of the cache\index{cache}.

The complement attribute is stored in the least significant bit of the
``else'' pointer of each node. An external pointer to a function can
also be complemented. The ``then'' pointer to a node, on the other
hand, is always \emph{regular\index{arc!regular}}. It is a mistake to
use a complement\index{arc!complement} pointer as-is to address
memory. Instead, it is always necessary to obtain a regular version
of it. This is normally done by calling
\emph{Cudd\_Regular}\eidx{Cudd\_Regular}. It is also a mistake to
call \emph{cuddUniqueInter}\eidx{cuddUniqueInter} with a complemented
``then'' child as argument. The calling procedure must apply De
Morgan's laws by complementing both pointers passed to
\emph{cuddUniqueInter}\eidx{cuddUniqueInter} and then taking the
complement of the result.
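The bit manipulation involved can be illustrated with a small
self-contained model. The macro names echo \emph{Cudd\_Regular},
\emph{Cudd\_Not}, and \emph{Cudd\_IsComplement}, but the node type and
definitions here are illustrative, not CUDD's:
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

typedef struct Node { int index; } Node;   /* stand-in for DdNode */

/* The complement attribute lives in the least significant bit,
 * which is otherwise zero because nodes are suitably aligned. */
#define NOT(p)      ((Node *)((uintptr_t)(p) ^ (uintptr_t)1))
#define REGULAR(p)  ((Node *)((uintptr_t)(p) & ~(uintptr_t)1))
#define IS_COMPL(p) ((int)((uintptr_t)(p) & (uintptr_t)1))

int main(void)
{
    static Node n = { 42 };
    Node *f = &n;
    Node *g = NOT(f);     /* complemented: do NOT dereference as-is */
    printf("%d %d\n", IS_COMPL(g), REGULAR(g)->index);  /* 1 42 */
    return 0;
}
\end{verbatim}
Dereferencing \texttt{g} directly would access a misaligned address;
only the regular version returned by \texttt{REGULAR} is safe to use.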
\subsection{The Cache}
\index{cache}\label{sec:cache}

Each entry of the cache consists of five fields: The operator, three
pointers to operands and a pointer to the result. The operator and the
three pointers to the operands are combined to form three words. The
combination relies on two facts:
\begin{itemize}
\item Most operations have one or two operands. A few bits are
  sufficient to discriminate all three-operand operations.
\item All nodes are aligned to 16-byte boundaries. (32-byte boundaries
  if 64-bit pointers are used.) Hence, there are a few bits available
  to distinguish the three-operand operations from the others and to
  assign unique codes to them.
\end{itemize}
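The second fact can be demonstrated in isolation: with 16-byte
alignment the low four bits of a node address are zero, so an operator
tag can ride in the same word as an operand pointer. The packing shown
is a sketch; CUDD's actual layout differs in detail:
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    static _Alignas(16) unsigned char node[32];  /* stand-in for a DD node */
    uintptr_t tag  = 5;                          /* a 4-bit operator code  */
    uintptr_t word = (uintptr_t)&node[0] | tag;  /* pointer and tag packed */

    /* Both pieces can be recovered exactly from the packed word. */
    printf("%d %d\n", (int)(word & 15) == 5,
           (void *)(word & ~(uintptr_t)15) == (void *)&node[0]);  /* 1 1 */
    return 0;
}
\end{verbatim}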
The cache does not contribute to the reference
\index{node!reference count}
counts of the nodes. The fact that the cache contains a
pointer to a node does not imply that the node is alive. Instead, when
garbage\index{garbage collection} collection takes place, all entries
of the cache pointing to dead\index{node!dead} nodes are cleared.

The cache is also cleared (of all entries) when dynamic
reordering\index{reordering} takes place. In both cases, the entries
removed from the cache are about to become invalid.

All operands and results in a cache entry must be pointers to
DdNodes\index{DdNode}. If a function produces more than one result,
or uses more than three arguments, there are currently two solutions:
\begin{itemize}
\item Build a separate, local, cache\index{cache!local}. (Using, for
  instance, the \emph{st} library\index{libraries!st}.)
\item Combine multiple results, or multiple operands, into a single
  diagram, by building a ``multiplexing structure'' with reserved
  variables.
\end{itemize}
Support of the former solution is under development. (See
\texttt{cuddLCache.c}.) Support for the latter solution may be
provided in future versions of the package.

There are three sets of interface\index{interface!cache} functions to
the cache. The first set is for functions with three operands:
\emph{cuddCacheInsert}\eidx{cuddCacheInsert} and
\emph{cuddCacheLookup}\eidx{cuddCacheLookup}. The second set is for
functions with two operands:
\emph{cuddCacheInsert2}\eidx{cuddCacheInsert2} and
\emph{cuddCacheLookup2}\eidx{cuddCacheLookup2}. The third set is for
functions with one operand:
\emph{cuddCacheInsert1}\eidx{cuddCacheInsert1} and
\emph{cuddCacheLookup1}\eidx{cuddCacheLookup1}. The second set is
slightly faster than the first, and the third set is slightly faster
than the second.
\subsubsection{Cache Sizing}
\index{cache!sizing}\label{sec:cache-sizing}

The size of the cache can increase during the execution of an
application. (There is currently no way to decrease the size of the
cache, though it would not be difficult to do it.) When a cache miss
occurs, the package uses the following criteria to decide whether to
resize the cache:
\begin{enumerate}
\item If the cache already exceeds the limit given by the
  \texttt{maxCache\index{maxCache}} field of the manager, no resizing
  takes place. The limit is the minimum of two values: a value set at
  initialization time and possibly modified by the application, which
  constitutes the hard limit beyond which the cache will never grow;
  and a number that depends on the current total number of slots in
  the unique\index{table!unique} table.
\item If the cache is not too large already, resizing is decided based
  on the hit rate. The policy adopted by the CUDD package is
  ``reward-based\index{cache!reward-based resizing}.'' If the cache hit
  rate is high, then it is worthwhile to increase the size of the
  cache.
\end{enumerate}
When resizing takes place, the statistical counters \index{statistical
counters} used to compute the hit rate are reinitialized so as to
prevent immediate resizing. The number of entries is doubled.
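The decision just described can be sketched as follows; the function
name, parameters, and thresholds are invented for illustration and do
not correspond to CUDD internals:
\begin{verbatim}
#include <assert.h>
#include <stdio.h>

/* On a cache miss: double the cache when the observed hit rate clears
 * the threshold and the limit has not been reached. */
static unsigned maybeResize(unsigned slots, unsigned maxSlots,
                            double hits, double lookups,
                            double minHitRate)
{
    if (slots >= maxSlots)          /* already at the limit */
        return slots;
    if (lookups > 0.0 && hits / lookups >= minHitRate)
        return slots * 2;           /* reward a high hit rate */
    return slots;
}

int main(void)
{
    assert(maybeResize(1u << 20, 1u << 23, 40.0, 100.0, 0.30) == 1u << 21);
    assert(maybeResize(1u << 20, 1u << 23, 10.0, 100.0, 0.30) == 1u << 20);
    assert(maybeResize(1u << 23, 1u << 23, 90.0, 100.0, 0.30) == 1u << 23);
    printf("ok\n");
    return 0;
}
\end{verbatim}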
The rationale for the ``reward-based\index{cache!reward-based resizing}''
policy is as follows. In many BDD/ADD applications the hit rate is
not very sensitive to the size of the cache: It is primarily a
function of the problem instance at hand. If a large hit rate is
observed, chances are that by using a large cache, the results of
large problems (those that would take longer to solve) will survive in
the cache without being overwritten long enough to cause a valuable
cache hit. Notice that when a large problem is solved more than once,
so are its recursively generated subproblems. If the hit rate is
low, the probability of large problems being solved more than once is
low.

The other observation about the cache sizing policy is that there is
little point in keeping a cache which is much larger than the unique
table. Every time the unique table ``fills up,'' garbage collection is
invoked and the cache is cleared of all dead entries. A cache that is
much larger than the unique\index{table!unique} table is therefore
less than fully utilized.
\subsubsection{Local Caches}
\index{cache!local}\label{sec:local-caches}

Sometimes it may be necessary or convenient to use a local cache. A
local cache can be lossless\index{cache!lossless} (no results are ever
overwritten), or it may store objects for which
canonical\index{canonical} representations are not available. One
important fact to keep in mind when using a local cache is that local
caches are not cleared during garbage\index{garbage collection}
collection or before reordering. Therefore, it is necessary to
increment the reference\index{node!reference count} count of all nodes
pointed to by a local cache. (Unless their reference counts are
guaranteed positive in some other way. One such way is by including
all partial results in the global result.) Before disposing of the
local cache, all elements stored in it must be passed to
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref}. As a consequence
of the fact that all results in a local cache are referenced, it is
generally convenient to store in the local cache also the results of
trivial problems, which are not usually stored in the global cache.
Otherwise, after a recursive call, it is difficult to tell whether the
result is in the cache, and therefore referenced, or not in the cache,
and therefore not referenced.

An alternative approach to referencing the results in the local caches
is to install hook functions (see Section~\ref{sec:hooks}) to be
executed before garbage collection.
\subsection{The Unique Table}
\index{table!unique}\label{sec:unique}

A recursive procedure typically splits the operands by expanding with
respect to the topmost variable. Topmost in this context refers to the
variable that is closest to the roots in the current variable order.
The nodes, on the other hand, hold the index, which is invariant with
reordering. Therefore, when splitting, one must use the
permutation\index{variable!permutation} array maintained by the
package to get the right level. Access to the permutation array is
provided by the macro \emph{cuddI}\eidx{cuddI} for BDDs and ADDs,
and by the macro \emph{cuddIZ}\eidx{cuddIZ} for ZDDs.
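A toy example of the index/level distinction follows. The permutation
array here is hypothetical data standing in for the manager's;
\emph{cuddI} essentially performs the same array lookup:
\begin{verbatim}
#include <stdio.h>

int main(void)
{
    /* perm[index] == level.  After reordering, the variable with
     * index 0 might sit at level 2, and so on. */
    int perm[3] = { 2, 0, 1 };

    /* The "topmost" variable is the one with the smallest level,
     * not the smallest index: here index 1, since perm[1] == 0. */
    int top = 0;
    for (int i = 1; i < 3; i++)
        if (perm[i] < perm[top]) top = i;
    printf("%d\n", top);   /* prints 1 */
    return 0;
}
\end{verbatim}
Splitting on the smallest index instead of the smallest level is a
classic bug in code that ignores the permutation array.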
The unique table consists of as many hash\index{table!hash} tables as
there are variables in use. These hash tables are called \emph{unique
subtables}. The sizes of the unique subtables are determined by two
criteria:
\begin{enumerate}
\item The collision\index{cache!collision list} lists should be short
  to keep access time down.
\item There should be enough room for dead\index{node!dead} nodes, to
  prevent too frequent garbage\index{garbage collection} collections.
\end{enumerate}
While the first criterion is fairly straightforward to implement, the
second leaves more room to creativity. The CUDD package tries to
figure out whether more dead nodes should be allowed to increase
performance. (See also Section~\ref{sec:params}.) There are two
reasons for not doing garbage collection too often. The obvious one is
that it is expensive. The second is that dead nodes may be
reclaimed\index{node!reclaimed}, if they are the result of a
successful cache lookup. Hence dead nodes may provide a substantial
speed-up if they are kept around long enough. The usefulness of
keeping many dead nodes around varies from application to application,
and from problem instance to problem instance. As in the sizing of the
cache, the CUDD package adopts a
``reward-based\index{table!unique!reward-based resizing}'' policy to
decide how much room should be used for the unique table. If the
number of dead nodes reclaimed is large compared to the number of
nodes directly requested from the memory manager, then the CUDD
package assumes that it will be beneficial to allow more room for the
subtables, thereby reducing the frequency of garbage collection. The
package does so by switching between two modes of operation:
\begin{enumerate}
\item Fast growth\index{table!unique!fast growth}: In this mode, the
  ratio of dead nodes to total nodes required for garbage collection
  is higher than in the slow growth mode to favor resizing
  of the subtables.
\item Slow growth\index{table!unique!slow growth}: In this
  mode keeping many dead nodes around is not as important as
  keeping memory requirements low.
\end{enumerate}
Switching from one mode to the other is based on the following
criteria:
\begin{enumerate}
\item If the unique table is already large, only slow growth is
  possible.
\item If the table is small and many dead nodes are being reclaimed,
  then fast growth is selected.
\end{enumerate}
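The mode selection can be sketched as a choice of the dead-node
fraction that must accumulate before garbage collection runs. All
names and threshold values below are illustrative, not the tuned CUDD
values:
\begin{verbatim}
#include <assert.h>
#include <stdio.h>

/* Pick the garbage-collection threshold: fast growth tolerates more
 * dead nodes; slow growth keeps memory requirements low. */
static double gcThreshold(unsigned keys, unsigned maxSlow,
                          unsigned reclaimed, unsigned allocated)
{
    if (keys > maxSlow)             /* table already large: slow growth */
        return 0.2;
    if (reclaimed > 2 * allocated)  /* reclaiming pays off: fast growth */
        return 0.4;
    return 0.2;
}

int main(void)
{
    assert(gcThreshold(1000, 1000000, 900, 100) == 0.4);    /* small, high reuse */
    assert(gcThreshold(2000000, 1000000, 900, 100) == 0.2); /* large table */
    printf("ok\n");
    return 0;
}
\end{verbatim}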
This policy is especially effective when the diagrams being
manipulated have lots of recombination. Notice the interplay of the
cache sizing and unique sizing: Fast growth normally occurs when the
cache hit rate is large. The cache and the unique table then grow in
concert, preserving a healthy balance between their sizes.
\subsection{Allowing Asynchronous Reordering}
\index{reordering!asynchronous}\label{sec:async}

Asynchronous reordering is the reordering that is triggered
automatically by the increase of the number of nodes. Asynchronous
reordering takes place when a new internal node must be created, and
the number of nodes has reached a given
threshold\index{reordering!threshold}. (The threshold is adjusted by
the package every time reordering takes place.)

Those procedures that do not create new nodes (e.g., procedures that
count the number of nodes or minterms\index{function!minterms}) need
not worry about asynchronous reordering: No special precaution is
necessary in writing them.

Procedures that only manipulate decision diagrams through the exported
functions of the CUDD package also need not concern themselves with
asynchronous reordering. (See Section~\ref{sec:nodes} for the
exceptions.)

The remaining class of procedures is composed of functions that visit
the diagrams and may create new nodes. All such procedures in the CUDD
package are written so that they can be interrupted by dynamic
reordering. The general approach followed goes under the name of
``abort and retry\index{reordering!abort and retry}.'' As the name
implies, a computation that is interrupted by dynamic reordering is
aborted and tried again.

A recursive procedure that can be interrupted by dynamic reordering
(an interruptible\index{reordering!interruptible procedure} procedure
from now on) is composed of two functions. One is responsible for the
real computation. The other is a simple
wrapper\index{reordering!function wrapper}, which tests whether
reordering occurred and restarts the computation if it did.

Asynchronous reordering of BDDs and ADDs can only be triggered inside
\emph{cuddUniqueInter}\eidx{cuddUniqueInter}, when a new node is about
to be created. Likewise, asynchronous reordering of ZDDs can only be
triggered inside \emph{cuddUniqueInterZdd}\eidx{cuddUniqueInterZdd}.
When reordering is triggered, three things happen:
\begin{enumerate}
\item \emph{cuddUniqueInter}\eidx{cuddUniqueInter} returns a NULL
  value;
\item The flag \emph{reordered} of the manager is set to 1. (0 means
  no reordering, while 2 indicates an error occurred during
  reordering.)
\item The counter \emph{reorderings} of the manager is incremented.
  The counter is initialized to 0 when the manager is started and can
  be accessed by calling
  \emph{Cudd\_ReadReorderings}\eidx{Cudd\_ReadReorderings}. By taking
  two readings of the counter, an application can determine if
  variable reordering has taken place between the first and the second
  reading. The package itself, however, does not make use of the
  counter: It is mentioned here for completeness.
\end{enumerate}

The recursive procedure that receives a NULL value from
\emph{cuddUniqueInter}\eidx{cuddUniqueInter} must free all
intermediate results that it may have computed before, and return NULL
in its turn.

The wrapper\index{reordering!function wrapper} function does not
decide whether reordering occurred based on the NULL return value,
because the NULL value may be the result of lack of memory. Instead,
it checks the \emph{reordered} flag.

When a recursive procedure calls another recursive procedure that may
cause reordering, it should bypass the wrapper and call the recursive
procedure directly. Otherwise, the calling procedure will not know
whether reordering occurred, and will not be able to restart. This is
the main reason why most recursive procedures are internal, rather
than static. (The wrappers, on the other hand, are mostly exported.)
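The wrapper/core division can be sketched with stand-ins for the
manager and the recursive core. The structure names, the use of $-1$ to
play the role of a NULL result, and the single simulated reordering are
all invented for illustration:
\begin{verbatim}
#include <stdio.h>

struct Mgr { int reordered; int pending; };  /* toy manager */

static int core(struct Mgr *m)
{
    if (m->pending) {        /* pretend node creation hit the threshold */
        m->pending = 0;
        m->reordered = 1;    /* cuddUniqueInter would return NULL here  */
        return -1;           /* abort: free partial results, pass NULL up */
    }
    return 42;               /* the real result */
}

static int wrapper(struct Mgr *m)
{
    int res;
    do {
        m->reordered = 0;
        res = core(m);
        /* Retry only if the flag says reordering happened; a bare
         * failure (e.g., out of memory) is passed through unchanged. */
    } while (res == -1 && m->reordered == 1);
    return res;
}

int main(void)
{
    struct Mgr m = { 0, 1 };           /* one reordering will trigger */
    printf("%d\n", wrapper(&m));       /* aborts once, retries, prints 42 */
    return 0;
}
\end{verbatim}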
\subsection{Debugging}
\index{debugging}\label{sec:debug}

By defining the symbol DD\_DEBUG\index{DD\_DEBUG} during compilation,
numerous checks are added to the code. In addition, the procedures
\emph{Cudd\_DebugCheck}\eidx{Cudd\_DebugCheck},
\emph{Cudd\_CheckKeys}\eidx{Cudd\_CheckKeys}, and
\emph{cuddHeapProfile}\eidx{cuddHeapProfile} can be called at any
point to verify the consistency of the data structure.
(\emph{cuddHeapProfile} is an internal procedure. It is declared in
\emph{cuddInt.h}\index{cuddInt.h}.) Procedures
\emph{Cudd\_DebugCheck} and \emph{Cudd\_CheckKeys} are especially
useful when CUDD reports that during garbage collection the number of
nodes actually deleted from the unique table is different from the
count of dead nodes kept by the manager. The error causing the
discrepancy may have occurred much earlier than it is discovered. A
few strategically placed calls to the debugging procedures can
considerably narrow down the search for the source of the problem.
(For instance, a call to \emph{Cudd\_RecursiveDeref} where one to
\emph{Cudd\_Deref} was required may be identified in this way.)

One of the most common problems encountered in debugging code based on
the CUDD package is a missing call to
\emph{Cudd\_RecursiveDeref}\eidx{Cudd\_RecursiveDeref}. To help
identify this type of problem, the package provides a function called
\emph{Cudd\_CheckZeroRef}\eidx{Cudd\_CheckZeroRef}. This function
should be called immediately before shutting down the manager.
\emph{Cudd\_CheckZeroRef} checks that the only nodes left with
non-zero reference\index{node!reference count} counts are the
predefined constants, the BDD projection\index{projection functions}
functions, and nodes whose reference counts are
saturated\index{node!reference count!saturated}.

For this function to be effective the application must explicitly
dispose of all diagrams to which it has pointers before calling it.
\subsection{Gathering and Interpreting Statistics}
\index{statistics}\label{sec:stats}

Function \emph{Cudd\_PrintInfo}\eidx{Cudd\_PrintInfo} can be called to
print out the values of parameters and statistics for a manager. The
output of \emph{Cudd\_PrintInfo} is divided into two sections. The
first reports the values of parameters that are under the application
control. The second reports the values of statistical counters and
other non-modifiable parameters. A quick guide to the interpretation
of all these quantities follows. For ease of exposition, we reverse
the order and describe the non-modifiable parameters first. We use
a sample run as an example. There is nothing special about this run.

\subsubsection{Non Modifiable Parameters}
\label{sec:nonModPar}

The list of non-modifiable parameters starts with:
\begin{verbatim}
**** CUDD non-modifiable parameters ****
Memory in use: 32544220
\end{verbatim}
This is the memory used by CUDD for three things mainly: Unique table
(including all DD nodes in use), node free list, and computed table.
This number almost never decreases in the lifetime of a CUDD manager,
because CUDD does not release memory when it frees nodes. Rather, it
puts the nodes on its own free list. This number is in bytes. It does
not represent the peak memory occupation, because it does not include
the size of data structures created temporarily by some functions (e.g.,
local look-up tables).
\begin{verbatim}
Peak number of nodes: 837018
\end{verbatim}
This number is the number of nodes that the manager has allocated.
This is not the largest size of the BDDs, because the manager will
normally have some dead nodes and some nodes on the free list.
\begin{verbatim}
Peak number of live nodes: 836894
\end{verbatim}
This is the largest number of live nodes that the manager has held
since its creation.
\begin{verbatim}
Number of BDD variables: 198
Number of ZDD variables: 0
\end{verbatim}
These numbers tell us this run was not using ZDDs.
\begin{verbatim}
Number of cache entries: 1048576
\end{verbatim}
Current number of slots of the computed table. If one has a
performance problem, this is one of the numbers to look at. The cache
size is always a power of 2.
\begin{verbatim}
Number of cache look-ups: 2996536
Number of cache hits: 1187087
\end{verbatim}
These numbers give an indication of the hit rate in the computed
table. It is not unlikely for model checking runs to get
hit rates even higher than this one (39.62\%).
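The quoted hit rate follows directly from the two counters above:
\begin{verbatim}
#include <stdio.h>

int main(void)
{
    double lookups = 2996536.0, hits = 1187087.0;
    printf("%.2f\n", 100.0 * hits / lookups);   /* prints 39.62 */
    return 0;
}
\end{verbatim}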
\begin{verbatim}
Number of cache insertions: 1809473
Number of cache collisions: 961208
Number of cache deletions: 0
\end{verbatim}
A collision\index{cache!collision} occurs when a cache entry is
overwritten. A deletion\index{cache!deletion}
occurs when a cache entry is invalidated (e.g., during garbage
collection). If the number of deletions is high compared to the
number of collisions, it means that garbage collection occurs too
often. In this case there were no garbage collections; hence, no
deletions.
\begin{verbatim}
Cache used slots = 80.90% (expected 82.19%)
\end{verbatim}
Percentage of cache slots that contain a valid entry. If this
number is small, it may signal one of three conditions:
\begin{enumerate}
\item The cache may have been recently resized and it is still filling
  up.
\item The cache is too large for the BDDs. This should not happen if
  the size of the cache is determined by CUDD.
\item The hash function is not working properly. This is accompanied
  by a degradation in performance. Conversely, a degradation in
  performance may be due to bad hash function behavior.
\end{enumerate}
The expected value is computed assuming a uniformly random
distribution of the accesses. If the difference between the measured
value and the expected value is large (unlike this case), the cache is
not working properly.
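The standard occupancy estimate for uniformly random accesses,
$1-(1-1/n)^k \approx 1-e^{-k/n}$ for $k$ insertions into $n$ slots,
reproduces the expected value quoted above. (That CUDD computes it with
exactly this formula is an assumption here; the numbers merely agree.)
\begin{verbatim}
#include <math.h>
#include <stdio.h>

int main(void)
{
    double k = 1809473.0;    /* cache insertions */
    double n = 1048576.0;    /* cache slots      */
    printf("%.2f\n", 100.0 * (1.0 - exp(-k / n)));  /* prints 82.19 */
    return 0;
}
\end{verbatim}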
\begin{verbatim}
Soft limit for cache size: 1318912
\end{verbatim}
This number says how large the cache can grow. This limit is based on
the size of the unique table. CUDD uses a reward-based policy for
growing the cache. (See Section~\ref{sec:cache-sizing}.) The default
hit rate for resizing is 30\% and the value in effect is reported
among the modifiable parameters.
\begin{verbatim}
Number of buckets in unique table: 329728
\end{verbatim}
This number is exactly one quarter of the one above. This is indeed
how the soft limit is determined currently, unless the computed table
hits the specified hard limit. (See below.)
\begin{verbatim}
Used buckets in unique table: 87.96% (expected 87.93%)
\end{verbatim}
Percentage of unique table buckets that contain at least one
node. Remarks analogous to those made about the used cache slots apply.
\begin{verbatim}
Number of BDD and ADD nodes: 836894
Number of ZDD nodes: 0
\end{verbatim}
How many nodes are currently in the unique table, either alive or dead.
\begin{verbatim}
Number of dead BDD and ADD nodes: 0
Number of dead ZDD nodes: 0
\end{verbatim}
Subtract these numbers from those above to get the number of live
nodes. In this case there are no dead nodes because the application
uses delayed dereferencing
(\emph{Cudd\_DelayedDerefBdd}\eidx{Cudd\_DelayedDerefBdd}).
\begin{verbatim}
Total number of nodes allocated: 836894
\end{verbatim}
This is the total number of nodes that were requested and obtained
from the free list. It never decreases, and is not an indication of
memory occupation after the first garbage collection. Rather, it is a
measure of the package activity.
\begin{verbatim}
Total number of nodes reclaimed: 0
\end{verbatim}
These are the nodes that were resuscitated from the dead. If they are
many more than the allocated nodes, and the total
number of slots is low relative to the number of nodes, then one may
want to increase the limit for fast unique table growth. In this case,
the number is 0 because of delayed dereferencing.
\begin{verbatim}
Garbage collections so far: 0
Time for garbage collections: 0.00 sec
Reorderings so far: 0
Time for reordering: 0.00 sec
\end{verbatim}
There is a GC for each reordering. Hence the first count will always be
at least as large as the third.
  1561. \begin{verbatim}
  1562. Node swaps in reordering: 0
  1563. \end{verbatim}
  1564. This is the number of elementary reordering steps. Each step consists
  1565. of the re-expression of one node while swapping two adjacent
  1566. variables. This number is a good measure of the amount of work done in
  1567. reordering.
  1568. \subsubsection{Modifiable Parameters}
  1569. \label{sec:modPar}
  1570. Let us now consider the modifiable parameters, that is, those settings on
  1571. which the application or the user has control.
  1572. \begin{verbatim}
  1573. **** CUDD modifiable parameters ****
  1574. Hard limit for cache size: 8388608
  1575. \end{verbatim}
  1576. This number counts entries. Each entry is 16 bytes if CUDD is compiled
  1577. to use 32-bit pointers. Two important observations are in order:
  1578. \begin{enumerate}
  1579. \item If the datasize limit is set, CUDD will use it to determine this
  1580. number automatically. On a Unix system, one can type ``limit'' or
  1581. ``ulimit'' to verify if this value is set. If the datasize limit is
  1582. not set, CUDD uses a default which is rather small. If you have
  1583. enough memory (say 64MB or more) you should seriously consider
  1584. \emph{not} using the default. So, either set the datasize limit, or
  1585. override the default with
  1586. \emph{Cudd\_SetMaxCacheHard}\eidx{Cudd\_SetMaxCacheHard}.
  1587. \item If a process seems to be going nowhere, a small value for
  1588. this parameter may be the culprit. One cannot overemphasize the
  1589. importance of the computed table in BDD algorithms.
  1590. \end{enumerate}
  1591. In this case the limit was automatically set for a target maximum
  1592. memory occupation of 104 MB.
\begin{verbatim}
Cache hit threshold for resizing: 15%
\end{verbatim}
This number can be changed if one suspects performance is hindered by
the small size of the cache, and the cache is not growing towards the
soft limit sufficiently fast. In such a case one can change the
default 30\% to 15\% (as in this case) or even 1\%.
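The threshold is changed with \emph{Cudd\_SetMinHit}; for instance (a
sketch, assuming \texttt{mgr} is an initialized manager):
\begin{verbatim}
/* Resize the cache once the hit rate exceeds 15%. */
Cudd_SetMinHit(mgr, 15);
\end{verbatim}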
\begin{verbatim}
Garbage collection enabled: yes
\end{verbatim}
One can disable it, but there are few good reasons for doing
so. It is normally preferable to raise the limit for fast unique table
growth. (See below.)
\begin{verbatim}
Limit for fast unique table growth: 1363148
\end{verbatim}
See Section~\ref{sec:unique} and the comments above about reclaimed
nodes and hard limit for the cache size. This value was chosen
automatically by CUDD for a datasize limit of 1 GB.
\begin{verbatim}
Maximum number of variables sifted per reordering: 1000
Maximum number of variable swaps per reordering: 2000000
Maximum growth while sifting a variable: 1.2
\end{verbatim}
Lowering these numbers will cause reordering to be less accurate and
faster. Results are somewhat unpredictable, because larger BDDs after one
reordering do not necessarily mean the process will go faster or slower.
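These limits can be adjusted with the corresponding setter functions.
For instance, to trade accuracy for speed one might write the following
(a sketch, assuming \texttt{mgr} is an initialized manager; the values
are illustrative):
\begin{verbatim}
Cudd_SetSiftMaxVar(mgr, 500);      /* sift at most 500 variables */
Cudd_SetSiftMaxSwap(mgr, 1000000); /* at most 10^6 swaps         */
Cudd_SetMaxGrowth(mgr, 1.1);       /* allow 10% growth per sift  */
\end{verbatim}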
\begin{verbatim}
Dynamic reordering of BDDs enabled: yes
Default BDD reordering method: 4
Dynamic reordering of ZDDs enabled: no
Default ZDD reordering method: 4
\end{verbatim}
These lines tell whether automatic reordering can take place and what
method would be used. The mapping from numbers to methods is in
\texttt{cudd.h}. One may want to try different BDD reordering
methods. If variable groups are used, however, one should not expect
to see big differences, because CUDD uses the reported method only to
reorder each leaf variable group (typically corresponding to present and
next state variables). For the relative order of the groups, it
always uses the same algorithm, which is effectively sifting.
As for enabling dynamic reordering or not, a sensible recommendation is the
following: Unless the circuit is rather small or one has a pretty good
idea of what the order should be, reordering should be enabled.
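Automatic reordering is controlled from the application with
\emph{Cudd\_AutodynEnable} and \emph{Cudd\_AutodynDisable}; for
instance (a sketch, assuming \texttt{mgr} is an initialized manager):
\begin{verbatim}
/* Enable dynamic reordering of BDDs with sifting. */
Cudd_AutodynEnable(mgr, CUDD_REORDER_SIFT);
/* ... BDD-intensive computation ... */
Cudd_AutodynDisable(mgr);
\end{verbatim}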
\begin{verbatim}
Realignment of ZDDs to BDDs enabled: no
Realignment of BDDs to ZDDs enabled: no
Dead nodes counted in triggering reordering: no
Group checking criterion: 7
Recombination threshold: 0
Symmetry violation threshold: 0
Arc violation threshold: 0
GA population size: 0
Number of crossovers for GA: 0
\end{verbatim}
Parameters for reordering. See the documentation of the functions used
to control these parameters for the details.
\begin{verbatim}
Next reordering threshold: 100000
\end{verbatim}
When the number of nodes crosses this threshold, reordering will be
triggered. (If enabled; in this case it is not.) This parameter is
updated by the package whenever reordering takes place. The
application can change it, for instance at start-up. Another
possibility is to use a hook function (see Section~\ref{sec:hooks}) to
override the default updating policy.
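For example, an application could raise the initial threshold at
start-up (a sketch, assuming \texttt{mgr} is an initialized manager;
200000 is an illustrative value):
\begin{verbatim}
Cudd_SetNextReordering(mgr, 200000);
\end{verbatim}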
\subsubsection{Extended Statistics and Reporting}
\label{sec:extendedStats}
The following symbols can be defined during compilation to increase
the amount of statistics gathered and the number of messages produced
by the package:
\begin{itemize}
\item DD\_STATS\index{DD\_STATS};
\item DD\_CACHE\_PROFILE\index{DD\_CACHE\_PROFILE};
\item DD\_UNIQUE\_PROFILE\index{DD\_UNIQUE\_PROFILE};
\item DD\_VERBOSE\index{DD\_VERBOSE}.
\end{itemize}
Defining DD\_CACHE\_PROFILE causes each entry of the cache to include
an access counter, which is used to compute simple statistics on the
distribution of the keys.
\subsection{Guidelines for Documentation}
\label{sec:doc}\index{documentation}
The documentation of the CUDD functions is extracted automatically
from the sources by \texttt{doxygen}\index{doxygen}
(\href{http://www.doxygen.org}{\texttt{www.doxygen.org}}).
The following guidelines are adhered to in CUDD to ensure consistent and
effective use of automatic extraction. It is recommended that
extensions to CUDD follow the same documentation guidelines.
\begin{itemize}
\item The documentation of an exported procedure should be sufficient
to allow one to use it without reading the code. It is not necessary
to explain how the procedure works; only what it does.
\item The \emph{see}\index{documentation!SeeAlso@\emph{see}}
fields should be space-separated lists of function names. The
\emph{see} field of an exported procedure should only reference
other exported procedures. The \emph{see} field of an internal
procedure may reference other internal procedures as well as
exported procedures, but no static procedures.
\item The return values are detailed in the
\emph{return}\index{documentation!Description@\emph{return}}
field, not in the
\emph{brief}\index{documentation!Synopsis@\emph{brief}} field.
\item The parameters are documented alongside their declarations.
Further comments may appear in the \emph{details} field.
\item The \emph{brief} field should be about one line long.
\end{itemize}
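As an illustration, the header of a hypothetical exported procedure
following these guidelines might look like this (the function name and
the \emph{see} targets are invented for the example):
\begin{verbatim}
/**
  @brief Computes the widget transform of a BDD.

  @details Further comments on the algorithm's behavior,
  but not on how it works, may appear here.

  @return a pointer to the resulting BDD if successful;
  NULL otherwise.

  @see Cudd_bddAnd Cudd_bddOr

*/
DdNode *
Cudd_bddWidget(
  DdManager * dd /**< manager */,
  DdNode * f /**< operand */)
\end{verbatim}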
%----------------------------------------
\section{The C++ Interface}
\label{sec:cpp}
\subsection{Compiling and Linking}
\label{sec:compileCpp}
To build an application that uses the CUDD C++ interface, you should
add
\begin{verbatim}
#include "cuddObj.hh"
\end{verbatim}
to your source files. In addition to the normal CUDD libraries (see
Section~\ref{sec:compileExt}) you should link
\verb|libobj.a|\index{libraries!obj} to your executable. Refer to the
installation notes in the top level directory of the distribution for
further details.
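As an illustration, a link command might look as follows; the paths
and library names are those of a typical source-tree build and must be
adjusted to your installation:
\begin{verbatim}
g++ -Iinclude -o myapp myapp.cc \
    obj/libobj.a cudd/libcudd.a mtr/libmtr.a \
    st/libst.a util/libutil.a epd/libepd.a
\end{verbatim}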
\subsection{Basic Manipulation}
\label{sec:basicCpp}
The following fragment of code illustrates some simple operations on
BDDs using the C++ interface.
\begin{verbatim}
    Cudd mgr(0,0);
    BDD x = mgr.bddVar();
    BDD y = mgr.bddVar();
    BDD f = x * y;
    BDD g = y + !x;
    cout << "f is" << (f <= g ? "" : " not")
         << " less than or equal to g\n";
\end{verbatim}
This code creates a manager called \verb|mgr| and two variables in it.
It then defines two functions \verb|f| and \verb|g| in terms of the
variables. Finally, it prints a message based on the comparison of the
two functions. No explicit referencing or dereferencing is required.
The operators are overloaded in the intuitive way. BDDs are freed when
execution leaves the scope in which they are defined or when the
variables referring to them are overwritten.
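The same style extends to other operations. For instance, existential
quantification and minterm printing can be written as follows (a
sketch continuing the fragment above; \texttt{ExistAbstract} takes a
cube of the variables to quantify):
\begin{verbatim}
    BDD h = f.ExistAbstract(x);  // quantify x out of f
    h.PrintMinterm();            // print the minterms of h
\end{verbatim}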
%----------------------------------------
\section{Acknowledgments}
\label{sec:ack}
The contributors: Iris Bahar, Hyunwoo Cho, Erica Frohm, Charlie Gaona,
Cheng Hua, Jae-Young Jang, Seh-Woong Jeong, Balakrishna Kumthekar,
Enrico Macii, Bobbie Manne, In-Ho Moon, Curt Musfeldt, Shipra Panda,
Abelardo Pardo, Bernard Plessier, Kavita Ravi, Hyongkyoon Shin, Alan
Shuler, Arun Sivakumaran, Jorgen Sivesind.

\noindent
The early adopters: Gianpiero Cabodi, Jordi Cortadella, Mario Escobar,
Gayani Gamage, Gary Hachtel, Mariano Hermida, Woohyuk Lee, Enric
Pastor, Massimo Poncino, Ellen Sentovich, the students of ECEN5139.

I am also particularly indebted to the following people for in-depth
discussions on BDDs: Armin Biere, Olivier Coudert, Hubert Garavel,
Arie Gurfinkel, Geert Janssen, Don Knuth, David Long, Jean Christophe
Madre, Ken McMillan, Shin-Ichi Minato, Jaehong Park, Rajeev Ranjan,
Rick Rudell, Ellen Sentovich, Tom Shiple, Christian Stangier, and
Bwolen Yang.

Special thanks to Norris Ip for guiding my faltering steps
in the design of the C++ interface.

Gianpiero Cabodi and Stefano Quer have graciously agreed to let me
distribute their dddmp library with CUDD.

Masahiro Fujita, Gary Hachtel, and Carl Pixley have provided
encouragement and advice.

The National Science Foundation and the Semiconductor Research
Corporation have supported in part the development of this package.
\phantomsection
\addcontentsline{toc}{section}{\numberline{}References}
%\bibliography{comb-synthesis,references,ref2}
\begin{thebibliography}{10}
\bibitem{Bahar93}
R.~I. Bahar, E.~A. Frohm, C.~M. Gaona, G.~D. Hachtel, E.~Macii, A.~Pardo, and
  F.~Somenzi.
\newblock Algebraic decision diagrams and their applications.
\newblock In {\em Proceedings of the International Conference on Computer-Aided
  Design}, pages 188--191, Santa Clara, CA, November 1993.

\bibitem{Bollig95}
B.~Bollig, M.~L\"obbing, and I.~Wegener.
\newblock Simulated annealing to improve variable orderings for {OBDDs}.
\newblock Presented at the International Workshop on Logic Synthesis,
  Granlibakken, CA, May 1995.

\bibitem{BBR}
K.~S. Brace, R.~L. Rudell, and R.~E. Bryant.
\newblock Efficient implementation of a {BDD} package.
\newblock In {\em Proceedings of the 27th Design Automation Conference}, pages
  40--45, Orlando, FL, June 1990.

\bibitem{BDD}
R.~E. Bryant.
\newblock Graph-based algorithms for {Boolean} function manipulation.
\newblock {\em IEEE Transactions on Computers}, C-35(8):677--691, August 1986.

\bibitem{Drechs95}
R.~Drechsler, B.~Becker, and N.~G\"ockel.
\newblock A genetic algorithm for variable ordering of {OBDDs}.
\newblock Presented at the International Workshop on Logic Synthesis,
  Granlibakken, CA, May 1995.

\bibitem{Friedman90}
S.~J. Friedman and K.~J. Supowit.
\newblock Finding the optimal variable ordering for binary decision diagrams.
\newblock {\em IEEE Transactions on Computers}, 39(5):710--713, May 1990.

\bibitem{Fujita91b}
M.~Fujita, Y.~Matsunaga, and T.~Kakuda.
\newblock On variable ordering of binary decision diagrams for the application
  of multi-level logic synthesis.
\newblock In {\em Proceedings of the European Conference on Design Automation},
  pages 50--54, Amsterdam, February 1991.

\bibitem{Held62}
M.~Held and R.~M. Karp.
\newblock A dynamic programming approach to sequencing problems.
\newblock {\em J. SIAM}, 10(1):196--210, 1962.

\bibitem{Ishiur91}
N.~Ishiura, H.~Sawada, and S.~Yajima.
\newblock Minimization of binary decision diagrams based on exchanges of
  variables.
\newblock In {\em Proceedings of the International Conference on Computer-Aided
  Design}, pages 472--475, Santa Clara, CA, November 1991.

\bibitem{Jeong93}
S.-W. Jeong, T.-S. Kim, and F.~Somenzi.
\newblock An efficient method for optimal {BDD} ordering computation.
\newblock In {\em International Conference on VLSI and CAD (ICVC'93)}, Taejon,
  Korea, November 1993.

\bibitem{Minato93}
S.-I. Minato.
\newblock Zero-suppressed {BDD}s for set manipulation in combinatorial
  problems.
\newblock In {\em Proceedings of the Design Automation Conference}, pages
  272--277, Dallas, TX, June 1993.

\bibitem{Panda95b}
S.~Panda and F.~Somenzi.
\newblock Who are the variables in your neighborhood.
\newblock In {\em Proceedings of the International Conference on Computer-Aided
  Design}, pages 74--77, San Jose, CA, November 1995.

\bibitem{Panda94}
S.~Panda, F.~Somenzi, and B.~F. Plessier.
\newblock Symmetry detection and dynamic variable ordering of decision
  diagrams.
\newblock In {\em Proceedings of the International Conference on Computer-Aided
  Design}, pages 628--631, San Jose, CA, November 1994.

\bibitem{Plessi93}
B.~F. Plessier.
\newblock {\em A General Framework for Verification of Sequential Circuits}.
\newblock PhD thesis, University of Colorado at Boulder, Dept.\ of Electrical
  and Computer Engineering, 1993.

\bibitem{Rudell93}
R.~Rudell.
\newblock Dynamic variable ordering for ordered binary decision diagrams.
\newblock In {\em Proceedings of the International Conference on Computer-Aided
  Design}, pages 42--47, Santa Clara, CA, November 1993.

\end{thebibliography}
% Index cross-references.
\index{Algebraic Decision Diagram|see{ADD}}
\index{Binary Decision Diagram|see{BDD}}
\index{Zero-suppressed Binary Decision Diagram|see{ZDD}}
\index{dot|see{graph, drawing}}
\index{extdoc|see{documentation}}
\index{node!terminal|see{node, constant}}
\phantomsection
\addcontentsline{toc}{section}{\numberline{}Index}
\printindex
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% TeX-PDF-mode: t
%%% End: