A production system operates in the GOOD state, and there is a constant probability that, at a disorder moment, it falls into the BAD state (and remains there). A decision-maker observes the output $X_t$ of the system at each time $t = 1, 2, \ldots, n$ and decides either to CONTINUE ({\it i.e.} reject $X_t$ and observe $X_{t+1}$) or to STOP ({\it i.e.} accept and receive $X_t$). The objective is to maximize the expected net value of $X_{\tau}$ at the stopping time $\tau$ chosen within the given finite horizon $n$. Recall is not allowed ({\it i.e.} an observation, once rejected, cannot be recalled later). For uniformly distributed observations we derive the Optimality Equation and show that the optimal policy is not necessarily of control-limit type; we also give an example in which the optimal policy is of control-limit type. An analytical solution for the infinite-horizon version with a discount rate over time, which requires solving a functional equation, is as yet unknown.
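
To illustrate the backward-induction structure underlying such finite-horizon stopping problems, the following is a minimal numerical sketch for a deliberately simplified variant. All modelling choices in it are hypothetical and do not reproduce the model analysed above: the current state is assumed observable, GOOD-state outputs are Uniform$(0,1)$, BAD-state outputs are assumed worthless, and the per-period disorder probability $p$ is constant. Under these simplifications the optimal policy happens to be of control-limit type, unlike the general case discussed above.

\begin{verbatim}
# Backward induction for a simplified, hypothetical variant of the problem:
# the state is observable, GOOD-state outputs are Uniform(0,1), BAD-state
# outputs are worth 0, and the disorder probability p is constant per period.

def backward_induction(n: int, p: float) -> list[float]:
    """Return acceptance thresholds c_k for each remaining horizon k = n..1.

    With k observations left in the GOOD state, accept x iff x >= c_k, where
    c_k = (1 - p) * V_{k-1} and V_k = E[max(X, c_k)] for X ~ Uniform(0, 1).
    """
    v = 0.5                 # V_1: the last observation must be accepted, E[X] = 1/2
    thresholds = [0.0]      # c_1 = 0: always accept the final observation
    for _ in range(2, n + 1):
        c = (1 - p) * v     # continuation value, weighted by survival probability
        v = 0.5 + 0.5 * c * c   # E[max(X, c)] = c^2/2 + 1/2 for X ~ Uniform(0, 1)
        thresholds.append(c)
    return thresholds[::-1]     # first entry: threshold when n observations remain

if __name__ == "__main__":
    # Example: horizon n = 5, disorder probability p = 0.1 per period.
    print(backward_induction(5, 0.1))
\end{verbatim}

The thresholds decrease as fewer observations remain, reflecting that the continuation value shrinks both with the horizon and with the risk of disorder; this is only a sketch of the dynamic-programming mechanics, not of the unobserved-disorder model treated in the paper.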