6.6   Intractability


The goal of complexity theory is to understand the nature of efficient computation. We have learned about analysis of algorithms, which enables us to classify algorithms according to the amount of resources they consume. In this section, we will learn about a rich class of problems for which nobody has been able to devise an efficient algorithm.

Computational complexity.

As digital computers were developed in the 1940s and 1950s, the Turing machine served as the theoretical model of computation. In the 1960s Hartmanis and Stearns proposed measuring the time and memory needed by a computer as a function of the input size. They defined complexity classes in terms of Turing machines and proved that some problems have "an inherent complexity that cannot be circumvented by clever programming." They also proved a formal version (time hierarchy theorem) of the intuitive idea that if given more time or space, Turing machines can compute more things. In other words, no matter how hard a problem is (time and space requirements), there are always harder problems.

Computational complexity is the art and science of determining resource requirements for different problems. Computational complexity deals with assertions about any conceivable algorithm for a problem. Making such statements is significantly more challenging than understanding the running time of one particular algorithm for the problem since we must reason about all possible algorithms (even those not yet discovered). This makes computational complexity an exciting, yet daunting, field of study. We will survey some of its most important ideas and practical outgrowths.

Polynomial time.

We have analyzed the running time of an algorithm as a function of its input size. When solving a given problem, we prefer an algorithm that takes 8 N log N steps to one that takes 3 N^2 steps, since when N is large, the first algorithm is significantly faster than the second. The second algorithm will ultimately solve the same problem (but it might take hours instead of seconds). In contrast, an exponential-time algorithm has a qualitatively different behavior. For example, a brute-force algorithm for the TSP might take N! steps. Even if each electron in the universe (10^79 of them) had the power of today's fastest supercomputer (10^12 instructions per second), and each worked for the lifetime of the universe (10^17 seconds) on the problem, it would barely make a dent in an instance with N = 1,000, since 1000! >> 10^1000, while 10^79 × 10^12 × 10^17 = 10^108. Exponential growth dwarfs technological change. We refer to any algorithm whose running time is bounded by a polynomial in the input size (e.g., N log N or N^2) as a polynomial-time algorithm. We say that a problem is intractable if there is no polynomial-time algorithm for it.


As programmers gained more experience with computation, it became evident that polynomial-time algorithms were useful and exponential-time algorithms generally were not. In a very influential paper, Jack Edmonds referred to polynomial-time algorithms as "good algorithms" and argued that polynomial time is a good surrogate for efficient computation. Kurt Gödel wrote a letter to von Neumann in 1956 that contains the (implicit) notion that polynomiality is a desirable feature. Even earlier (1953), von Neumann recognized the qualitative difference between polynomial and exponential algorithms. The idea of classifying problems according to polynomial and exponential time profoundly changed the way people thought about computational problems.

NP.

Informally, we define a search problem as a computational problem in which we are looking for a solution among a (potentially huge) number of possibilities, but such that when we find a solution, we can easily check that it solves our problem. Given an instance I of a search problem (some input data specifying the problem), our goal is to find a solution S (an entity that meets some pre-specified criterion) or report that no such solution exists. To be a search problem, we require that it be easy to check that S is indeed a solution. By easy, we mean polynomial-time in the size of the instance I. The complexity class NP is the set of all search problems. While it is easy to check a proposed solution to such a problem, how difficult is it to find a solution from scratch?

Remark: our definition of NP is slightly non-standard. Historically, complexity classes were defined in terms of decision problems (yes-no problems). For example, given a matrix A and a vector b, does there exist a solution x such that Ax = b?

P.

The complexity class P is the set of all search problems solvable in polynomial time (on a deterministic Turing machine). As before, we define P in terms of search problems (instead of decision problems). It captures most of the problems that we can solve in practice on real machines. We list a few examples below:


GCD. Find the greatest common divisor of two integers x and y. Algorithm: Euclid's algorithm (Euclid, 300 BCE). Instance: 34, 51. Solution: 17.

STCONN. Given a graph G and two vertices s and t, find a path from s to t. Algorithm: BFS or DFS (Theseus).

SORT. Find the permutation that puts the elements in ascending order. Algorithm: mergesort (von Neumann, 1945). Instance: 2.3 8.5 1.2 9.1 2.2 0.3. Solution: 5 2 4 0 1 3.

PLANARITY. Given a graph G, draw it in the plane so that no two edges cross. Algorithm: Hopcroft-Tarjan (1974).

LSOLVE. Given a matrix A and a vector b, find a vector x such that Ax = b. Algorithm: Gaussian elimination (Edmonds, 1967). Instance: x + y = 1, 2x + 4y = 3. Solution: x = 1/2, y = 1/2.

LP. Given a matrix A and a vector b, find a vector x such that Ax ≤ b. Algorithm: ellipsoid algorithm (Khachiyan, 1979). Instance: x + y ≤ 1, 2x + 4y ≤ 3. Solution: x = 0, y = 0.

DIOPHANTINE. Given a (sparse) polynomial in one variable with integer coefficients, find an integral root. Algorithm: Smale et al. (1999). Instance: x^5 - 32x = 2.
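
As a concrete example from this list, here is a minimal Java sketch of Euclid's algorithm (the class and method names are our own). Its running time is polynomial in the input size: every two iterations at least halve one of the operands, so the number of iterations is proportional to the number of bits in the inputs.

    public class GCD {
        // Euclid's algorithm: gcd(x, y) = gcd(y, x mod y), and gcd(x, 0) = x.
        public static long gcd(long x, long y) {
            while (y != 0) {
                long r = x % y;   // remainder when x is divided by y
                x = y;
                y = r;
            }
            return x;
        }

        public static void main(String[] args) {
            System.out.println(gcd(34, 51));   // prints 17, as in the list above
        }
    }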


Extended Church-Turing Thesis.

In the mid-1960s, Cobham and Edmonds independently observed that the set of problems solvable in a polynomial number of steps remains invariant over a very wide range of computational models, from deterministic Turing machines to RAM machines. The extended Church-Turing thesis asserts that the Turing machine is as efficient as any physical computing device. That is, P is the set of search problems solvable in polynomial time in this universe. If some piece of hardware solves a problem of size N in time T(N), the extended Church-Turing thesis asserts that a deterministic Turing machine can do it in time T(N)^k for some fixed constant k, where k depends on the particular problem. Andrew Yao expresses the broad implications of this thesis:
They imply that at least in principle, to make future computers more efficient, one only needs to focus on improving the implementation technology of present-day computer designs.

In other words, any reasonable model of computation can be efficiently simulated on a (probabilistic) Turing machine. The extended Church-Turing thesis is true for all known physical general-purpose computers. For random access machines (e.g., your PC or Mac) the constant is k = 2. So, for example, if a random access machine can perform a computation in time N^(3/2), then a Turing machine can do the same computation in time N^3.

Does P = NP?

One of the most profound scientific questions of our time is whether P = NP. That is, can all search problems be solved in polynomial time? The Clay Mathematics Institute offers a one million dollar Millennium Prize for its resolution. The overwhelming consensus is that P != NP, but nobody has been able to prove it.


Gödel's letter to von Neumann anticipated the P = NP question. He recognized that if P = NP (that is, if satisfiability is in P), it "would have consequences of the greatest importance," since then "the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine." He asked for which combinatorial problems there is a more efficient alternative to exhaustive search.

NP-completeness.

Informally, NP-complete problems are the "hardest" problems in NP; they are the ones most likely not to be in P. Formally, a problem is NP-complete if (i) it is in NP and (ii) every problem in NP polynomial-time reduces to it. Defining the concept of NP-completeness does not by itself mean that such problems exist; in fact, the existence of NP-complete problems is an amazing thing. We cannot prove a problem NP-complete by presenting a reduction from each NP problem, since there are infinitely many of them. In the early 1970s, Cook and Levin independently proved that SAT is NP-complete.

This is an example of universality: if we can solve any one NP-complete problem in polynomial time, then we can solve every problem in NP in polynomial time. It is a singular scientific discovery, giving a common explanation to all sorts of problems. It is even more amazing that there exist "natural" problems that are NP-complete.

The impact of NP-completeness on the natural sciences has been undeniable. Once the first NP-complete problems were discovered, intractability "spread like a shockwave through the space of problems," first through computer science, and then through other scientific disciplines. Papadimitriou lists 20 diverse scientific disciplines that had been coping with internal questions of complexity; ultimately, scientists understood the inherent difficulty of their work after realizing that their core problems were NP-complete. NP-completeness is mentioned as a keyword in 6,000 scientific papers per year. It "captures vast domains of computational, scientific, mathematical endeavors, and seems to roughly delimit what mathematicians and scientists had been aspiring to compute feasibly." [Papadimitriou] Few scientific theories have had such a breadth and depth of influence.

Some NP-complete problems. Since the discovery that SAT is NP-complete, tens of thousands of problems have been identified as NP-complete. In 1972, Karp showed that 21 of the most infamous problems in discrete mathematics were NP-complete, including TSP, KNAPSACK, 3-COLOR, and CLIQUE. The failure of scientists to find efficient algorithms for these 21 problems, despite being unaware that they were NP-complete, was among the first evidence suggesting that P != NP. The known NP-complete problems are remarkable in their diversity and pervasiveness.

Coping with intractability.

The theory of NP-completeness says that unless P = NP, there are some important problems for which we can't create an algorithm that simultaneously achieves the following three properties:

  1. It solves arbitrary instances of the problem.
  2. It is guaranteed to find an optimal solution.
  3. It is guaranteed to run in polynomial time.

When we encounter an NP-complete problem, we must relax one of these three requirements. We will consider solutions to the TSP that relax each of the three goals in turn.

Complexity theory deals with worst-case behavior. This leaves open the possibility of designing algorithms that run quickly on some instances but take a prohibitive amount of time on others. For example, Chaff is a program that can solve many real-world SAT instances with 10,000 variables; remarkably, it was developed by two undergraduates at Princeton. Such an algorithm is not guaranteed to run in polynomial time, but the instances we're interested in may be "easy."

Sometimes we may be willing to sacrifice the guarantee of finding the optimal solution. Many heuristic techniques (simulated annealing, genetic algorithms, the Metropolis algorithm) have been designed to find "nearly optimal" solutions to the TSP. Sometimes it is even possible to prove how good the resulting solution will be. For example, Sanjeev Arora designed an approximation algorithm for the Euclidean TSP that is guaranteed to find a solution that costs at most, say, 1% above the optimum. Designing approximation algorithms is an active area of research. Unfortunately, there are also non-approximability results of the form: if you can find an approximation algorithm for problem X that is guaranteed to get within a factor of 2 of the optimum, then P = NP. Thus, unless P = NP, designing good approximation algorithms for some NP-complete problems is not possible.

If we are trying to solve a special class of TSP instances, e.g., where the points lie on the boundary of a circle or on the vertices of an M-by-N lattice, then we can design efficient (and trivial) algorithms to solve the problem, as in the sketch below.
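
For example, when the points lie on a circle centered at the origin, an optimal tour simply visits the points in order of polar angle. A minimal Java sketch of this special case (the class and method names are our own):

    import java.util.Arrays;
    import java.util.Comparator;

    public class CircleTSP {
        // Optimal tour for points on a circle centered at the origin:
        // sort the points by polar angle and visit them in that order.
        public static double[][] tour(double[][] points) {
            double[][] t = points.clone();
            Arrays.sort(t, Comparator.comparingDouble((double[] p) -> Math.atan2(p[1], p[0])));
            return t;
        }
    }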

Exploiting intractability. Having intractable problems around is occasionally a good thing. In Section XYZ, we will exploit intractable problems to design cryptographic systems.

Between P and NP-complete. Most natural problems in NP are now known to be either in P or NP-complete. If P != NP, then there provably exist NP problems that are neither in P nor NP-complete. They are like "dark matter we have not developed means of observing." A few notable problems in this netherworld remain unclassified: factoring and graph isomorphism.

Other complexity classes.

The complexity classes P, NP, and NP-complete are the three most famous complexity classes. Scott Aaronson's website, the Complexity Zoo, contains a comprehensive list of the many other complexity classes that are useful in classifying problems according to their computational resources (time, space, parallelizability, use of randomness, quantum computing).

Other types of computational problems.

We focus on search problems since this is a very rich and important class of problems for scientists and engineers.

Output polynomial time.

Some problems involve more output than a single bit of information. For example, outputting a solution to the Towers of Hanoi problem requires at least 2^N steps: not because the solution is inherently hard to compute, but because there are 2^N symbols of output, and it takes one unit of time to write each output symbol. Perhaps a more natural way to measure efficiency is as a function of both the input size and the output size. A classic problem from electrical engineering is to build, from a given regular expression (RE), a DFA that uses the minimum number of states. We would like an algorithm that is polynomial both in the size of the input RE (number of symbols) and in the size of the output DFA (number of states). Unless P = NP, designing such an algorithm is impossible. In fact, it's not even possible to design a polynomial-time algorithm that gets within a constant (or even polynomial) number of states of the minimum! Without the theory of NP-completeness, researchers would waste time pursuing such unpromising research directions.

Other lower bounds.

A brute-force algorithm for the TSP takes on the order of N! steps. Using dynamic programming, we can get this down to about 2^N steps. The best known lower bound is linear in N. The essence of computational complexity is the attempt to find matching upper and lower bounds.
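
To illustrate the gap between N! and 2^N, here is a minimal Java sketch of the dynamic-programming approach (the Held-Karp algorithm); the class and method names are our own, the code assumes at least two cities, and it favors clarity over efficiency.

    import java.util.Arrays;

    public class HeldKarpTSP {
        // Returns the length of the shortest tour that starts at city 0,
        // visits every city exactly once, and returns to city 0.
        public static double tour(double[][] dist) {
            int n = dist.length;
            // best[S][j] = length of the shortest path that starts at city 0,
            // visits exactly the cities in bitmask S, and ends at city j.
            double[][] best = new double[1 << n][n];
            for (double[] row : best) Arrays.fill(row, Double.POSITIVE_INFINITY);
            best[1][0] = 0.0;                        // only city 0 visited so far
            for (int S = 1; S < (1 << n); S++) {
                if ((S & 1) == 0) continue;          // every path starts at city 0
                for (int j = 0; j < n; j++) {
                    if ((S & (1 << j)) == 0) continue;
                    if (best[S][j] == Double.POSITIVE_INFINITY) continue;
                    for (int k = 0; k < n; k++) {    // extend the path to city k
                        if ((S & (1 << k)) != 0) continue;
                        int T = S | (1 << k);
                        best[T][k] = Math.min(best[T][k], best[S][j] + dist[j][k]);
                    }
                }
            }
            double opt = Double.POSITIVE_INFINITY;   // close the tour at city 0
            for (int j = 1; j < n; j++)
                opt = Math.min(opt, best[(1 << n) - 1][j] + dist[j][0]);
            return opt;
        }
    }

The table has 2^N × N entries, so the algorithm uses time on the order of 2^N N^2 and space on the order of 2^N N: still exponential, but far smaller than N!.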

Circuit complexity.

There are other ways to define and measure computational complexity. A Boolean circuit with n inputs can compute any Boolean function of n variables. We can associate with each circuit a language: the set of binary strings of length n for which the circuit outputs 1. We need one circuit for each input size n. Shannon (1949) proposed the size of the circuit as a measure of complexity. It is known that a language has uniformly polynomial-size circuits if and only if it is in P.

Physical and analog computation.

The P = NP question is a mathematical question regarding the capabilities of Turing machines and classical digital computers. We might also wonder whether the same holds for analog computers. By analog, we mean any "deterministic physical device that uses a fixed number of physical variables to represent each problem variable," with the internal state represented by continuous variables instead of discrete ones. Proposed examples include soap bubbles, protein folding, quantum computing, gears, time travel, and black holes.

Vergis, Steiglitz, and Dickinson proposed an analog form of the Strong Church-Turing thesis:

Any finite analog computer can be simulated efficiently by a digital computer, in the sense that the time required by the digital computer to simulate the analog computer is bounded by a polynomial function of the resources used by the analog computer.

The resources of the analog computer could be time, volume, mass, energy, torque, or angular momentum. (Reference: The Physics of Analog Computation.)

Any reasonable model of computation (e.g., not involving exponential parallelism) can be simulated in polynomial time by a Turing machine (supplemented by a hardware random number generator).

Scott Aaronson argues that this perspective can yield new insights into physics. One day, "the presumed intractability of NP-complete problems might be taken as a useful constraint in the search for new physical theories," just like the second law of thermodynamics. Such a principle could still be falsified by experiment, but we shouldn't waste time looking for counterexamples.

PRIMES and COMPOSITE.

It is easy to convince someone that a number is composite by producing a factor. Then, the person just has to check (by long division) that you did not lie to them. Marin Mersenne conjectured that numbers of the form 2^p - 1 are prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127, and 257. His conjecture for p = 67 was disproved by F. N. Cole over two hundred and fifty years later, in 1903. According to E. T. Bell's book Mathematics: Queen and Servant of Science:
At the October meeting of the AMS, Cole announced a talk "On the Factorisation of Large Numbers". He walked up to the blackboard without saying a word, calculated by hand the value of 2^67, and carefully subtracted 1. Then he multiplied two numbers (which were 193707721 and 761838257287). Both results written on the blackboard were equal. Cole silently walked back to his seat, and this is said to be the first and only talk held during an AMS meeting where the audience applauded. There were no questions. According to Cole, it took him about three years, working each Sunday, to find this factorization.

For the record, 2^67 - 1 = 147573952589676412927 = 193707721 × 761838257287.
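
A computer can check Cole's arithmetic in an instant. A minimal Java sketch (the class name is our own):

    import java.math.BigInteger;

    public class Cole {
        public static void main(String[] args) {
            // Verify that 2^67 - 1 = 193707721 * 761838257287.
            BigInteger mersenne = BigInteger.valueOf(2).pow(67).subtract(BigInteger.ONE);
            BigInteger product  = new BigInteger("193707721")
                                      .multiply(new BigInteger("761838257287"));
            System.out.println(mersenne);                  // 147573952589676412927
            System.out.println(mersenne.equals(product));  // true
        }
    }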

Q + A

Q. Are polynomial algorithms always useful?

A. No, algorithms that take N^100 or 10^100 N^2 steps are as useless in practice as exponential ones. But the constants and exponents that arise in practice are usually sufficiently small that polynomial-time algorithms scale to huge problems, so polynomiality often serves as a surrogate for useful in practice.

Q. Why is the class of all search problems named NP?

A. The original definition of NP was in terms of nondeterministic Turing machines: NP is the set of all decision problems that can be solved in polynomial time on a nondeterministic Turing machine. Roughly speaking, the difference between a deterministic and a nondeterministic Turing machine is that the former operates like a conventional computer, performing each instruction in sequence and forming a computational path, whereas the latter can "branch off," with each branch executing a different statement in parallel, forming a computational tree. (If any path in the tree leads to a YES, then we accept; if all paths lead to NO, we reject.) This is where the N in NP comes from. It turns out that this definition is equivalent to our certificate-based one, but the certificate definition is now more widely used. (Karp's 1972 paper also uses the polynomial-time verifiability definition.)

Q. What is the complexity class NP-hard?

A. There are several competing definitions. We define a problem (decision, search, or optimization) to be NP-hard if solving it in polynomial time would imply P = NP. This definition implicitly uses Turing reductions (extended to search problems).

Q. What's so hard about factoring an integer N in polynomial time? Can't I just divide all potential factors less than N (or √N) into N and see if any leave a remainder of zero?

A. The algorithm is correct, but remember that it takes only lg N bits to represent the integer N. For an algorithm to be polynomial in the input size, it must be polynomial in lg N, not in N. Trial division takes about √N = 2^((lg N)/2) steps, which is exponential in the number of bits, as in the sketch below.
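
A minimal Java sketch of trial division (the class and method names are our own), with the cost spelled out in the comments:

    import java.math.BigInteger;

    public class TrialDivision {
        // Returns the smallest nontrivial factor of n, or null if n is prime.
        // The loop runs up to sqrt(N) times, which is polynomial in N but
        // exponential in the input size lg N, since sqrt(N) = 2^((lg N)/2).
        public static BigInteger factor(BigInteger n) {
            BigInteger f = BigInteger.valueOf(2);
            while (f.multiply(f).compareTo(n) <= 0) {
                if (n.mod(f).signum() == 0) return f;
                f = f.add(BigInteger.ONE);
            }
            return null;
        }
    }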

Q. How is it possible that checking whether an integer is composite is solvable in polynomial time, yet finding its factors is not known (or believed) to be?

A. There are ways to prove a number composite without getting your hands on any of its factors. A famous theorem from number theory (Fermat's little theorem) implies that if you have two integers a and p such that (i) a is not a multiple of p and (ii) a^(p-1) ≠ 1 (mod p), then p is not prime.
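
A minimal Java sketch of this compositeness test (the class and method names are our own). If it returns true, then p is certainly composite, even though no factor has been produced; a false result is inconclusive.

    import java.math.BigInteger;

    public class FermatWitness {
        // Returns true if a proves that p is composite: a is not a multiple
        // of p, yet a^(p-1) != 1 (mod p), violating Fermat's little theorem.
        public static boolean provesComposite(BigInteger a, BigInteger p) {
            if (a.mod(p).signum() == 0) return false;    // a is a multiple of p
            BigInteger r = a.modPow(p.subtract(BigInteger.ONE), p);
            return !r.equals(BigInteger.ONE);
        }

        public static void main(String[] args) {
            BigInteger a = BigInteger.valueOf(2), p = BigInteger.valueOf(15);
            System.out.println(provesComposite(a, p));   // true: 15 is composite
        }
    }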

Q. Is there a decision problem that is solvable in polynomial time on a quantum computer, but provably not in P?

A. This is an open research problem. FACTOR is a candidate: it is solvable in polynomial time on a quantum computer (Shor's algorithm) and widely believed to be outside P, but there is no proof that it is not in P.

Q. Does NP = EXPTIME?

A. The experts believe not, but they have been unable to prove it.

Q. Suppose someone proves P = NP. What would be the practical consequences?

A. It depends on how the question is resolved. Obviously, it would be a remarkable theoretical breakthrough. It would have dramatic practical significance if the proof of P = NP established a fast algorithm for an important NP-complete problem. But if the proof resulted in a 2^100 N^117 algorithm for the TSP (and the constant and exponent could not be reduced), it would have little practical impact. It could also be that someone proves P = NP by indirect means, yielding no algorithm at all!

Q. Suppose someone proves P != NP. What would be the practical consequences?

A. It would be a remarkable theoretical breakthrough and solidify the foundation of much of computational complexity.

Q. Suppose P = NP. Does that mean deterministic TMs are the same as nondeterministic TMs?

A. Not quite. For example, even if P = NP, a nondeterministic TM might be able to solve a problem in time proportional to N^2 while the best deterministic one takes time proportional to N^3. P = NP just means that the two types of machines solve the same set of decision problems in polynomial time; it says nothing about the degree of the polynomial.

Q. Where can I learn more about NP-completeness?

A. The authoritative reference remains Garey and Johnson's Computers and Intractability: A Guide to the Theory of NP-Completeness. Many of the most important subsequent discoveries are documented in David Johnson's NP-completeness column.

Exercises

  1. Suppose that X is NP-complete, X poly-time reduces to Y, and Y poly-time reduces to X. Is Y necessarily NP-complete?

Answer: No, since Y may not be in NP. For example, if X = CIRCUIT-SAT and Y = CO-CIRCUIT-SAT, then X and Y satisfy the conditions, but it is unknown whether Y is in NP. Note that the answer depends on our definition of poly-time reduction (we use Turing reductions, not Karp reductions).

  2. Explain why the optimization version of the vertex cover problem is not necessarily a search problem.

    Answer: There does not appear to be an efficient way to certify that a purported solution is the best possible (even though we could use binary search on the search version of the problem to find the best solution).

Web Exercises

    1. Subset sum. Given N positive integers and a target value V, determine if there is a subset whose sum is exactly V. Divide the integers into 4 equal groups. Enumerate and store all of the subset sums in each group by brute force. Let A, B, C, and D denote the sets of subset sums of the four groups. The goal is to find integers a, b, c, and d such that a + b + c + d = V, where a is in A, b is in B, c is in C, and d is in D. Use a heap to enumerate the sums a + b in increasing order, where a is in A and b is in B. Simultaneously, use another heap to enumerate the sums c + d in decreasing order, where c is in C and d is in D. If the current (a + b) + (c + d) is less than V, advance the increasing enumeration; if it is greater than V, advance the decreasing one; if it equals V, you are done. (A simpler two-way variant is sketched below.)
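
      For illustration, here is a minimal Java sketch of the simpler two-way variant (the class and method names are our own): split the integers into two halves, enumerate all subset sums of each half, sort both lists, and scan them with two pointers. It uses memory proportional to 2^(N/2); the four-group heap scheme described above is designed to reduce that memory cost.

        import java.util.Arrays;

        public class SubsetSum {
            // All 2^len subset sums of a[lo], ..., a[lo+len-1].
            private static long[] sums(long[] a, int lo, int len) {
                long[] s = new long[1 << len];
                for (int mask = 0; mask < (1 << len); mask++)
                    for (int i = 0; i < len; i++)
                        if ((mask & (1 << i)) != 0) s[mask] += a[lo + i];
                return s;
            }

            public static boolean hasSubsetWithSum(long[] a, long V) {
                int half = a.length / 2;
                long[] left  = sums(a, 0, half);
                long[] right = sums(a, half, a.length - half);
                Arrays.sort(left);
                Arrays.sort(right);
                int i = 0, j = right.length - 1;   // left ascending, right descending
                while (i < left.length && j >= 0) {
                    long sum = left[i] + right[j];
                    if      (sum == V) return true;
                    else if (sum <  V) i++;        // need a larger sum
                    else               j--;        // need a smaller sum
                }
                return false;
            }
        }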
    2. Sum of square roots. What is the minimum nonzero difference between two sums of square roots of integers? Given n and k, find the minimum positive value of

      | √a1 + √a2 + ... + √ak - (√b1 + √b2 + ... + √bk) |

      where the ai and bi are integers between 0 and n. For example, r(20, 2) = √10 + √11 - √5 - √18 and r(20, 3) = √5 + √6 + √18 - √4 - √12 - √12. Hint: enumerate all 2^(n/2) sums of square roots of the first n/2 integers and call that set A; enumerate all 2^(n/2) sums of square roots of the last n/2 integers and call that set B. Now enumerate the sums a + b in sorted order, where a is in A and b is in B, and look for consecutive sums whose difference is very tiny.

    3. Dividing diamonds. Given N (around 36) class D diamonds, divide them into two groups so that they are as close in total weight to each other as possible. Assume the weights are real numbers (measured in carats).
    4. Hamilton path in DAG. Given a directed acyclic graph G, give an O(n+m)-time algorithm to test whether or not it has a Hamiltonian path. Hint: topological sort; a sketch appears below.
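
      A minimal Java sketch of the hint (the class and method names are our own). It assumes the input is a DAG given in adjacency-list form; the key fact is that a DAG has a Hamiltonian path if and only if each consecutive pair of vertices in a topological order is joined by an edge.

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        public class DagHamiltonianPath {
            // adj.get(v) lists the w with an edge v -> w; the graph must be acyclic.
            public static boolean hasHamiltonianPath(List<List<Integer>> adj) {
                int n = adj.size();
                int[] indegree = new int[n];
                for (List<Integer> out : adj)
                    for (int w : out) indegree[w]++;
                Deque<Integer> queue = new ArrayDeque<>();    // Kahn's algorithm
                for (int v = 0; v < n; v++)
                    if (indegree[v] == 0) queue.add(v);
                int prev = -1;
                for (int count = 0; count < n; count++) {
                    int v = queue.remove();
                    // The previous vertex in topological order must point to v.
                    if (prev != -1 && !adj.get(prev).contains(v)) return false;
                    prev = v;
                    for (int w : adj.get(v))
                        if (--indegree[w] == 0) queue.add(w);
                }
                return true;
            }
        }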
    5. Which of the following can we infer from the fact that the traveling salesperson problem is NP-complete, if we assume that P is not equal to NP?
      1. There does not exist an algorithm that solves arbitrary instances of the TSP problem.
      2. There does not exist an algorithm that efficiently solves arbitrary instances of the TSP problem.
      3. There exists an algorithm that efficiently solves arbitrary instances of the TSP problem, but no one has been able to find it.
      4. The TSP is not in P.
      5. All algorithms that are guaranteed to solve the TSP run in polynomial time for some family of input points.
      6. All algorithms that are guaranteed to solve the TSP run in exponential time for all families of input points.

      Answer: (b) and (d) only.

    6. Which of the following can we infer from the fact that PRIMALITY is in NP but not known to be NP-complete, if we assume that P is not equal to NP?
      1. There exists an algorithm that solves arbitrary instances of PRIMALITY.
      2. There exists an algorithm that efficiently solves arbitrary instances of PRIMALITY.
      3. If we found an efficient algorithm for PRIMALITY, we could immediately use it as a black box to solve TSP.

Answer: We can infer only (a), since all problems in NP are decidable. If P != NP, then there are problems in NP that are neither in P nor NP-complete; PRIMALITY could have been one of them (although the AKS algorithm has since shown that PRIMALITY is in P). Part (c) cannot be inferred since we don't know whether PRIMALITY is NP-complete.

    7. Which of the following are NP-complete?
      1. The brute force TSP algorithm.
      2. The quicksort algorithm for sorting.
      3. The Halting problem.
      4. Hilbert's 10th problem.

Answer: None. NP-completeness deals with *problems*, not specific algorithms for problems. The halting problem and Hilbert's 10th problem are undecidable, so they are not in NP (and all NP-complete problems are in NP).

    8. Let X and Y be two decision problems. Suppose we know that X reduces to Y. Which of the following can we infer?
      1. If Y is NP-complete then so is X.
      2. If X is NP-complete then so is Y.
      3. If Y is NP-complete and X is in NP then X is NP-complete.
      4. If X is NP-complete and Y is in NP then Y is NP-complete.
      5. X and Y can't both be NP-complete.
      6. If X is in P, then Y is in P.
      7. If Y is in P, then X is in P.

      Answer: (d) and (g) only. X reduces to Y means that if you had a black box to solve Y efficiently, you could use it to solve X efficiently. X is no harder than Y.

    9. Show that CIRCUIT-SAT reduces to CIRCUIT-DIFF. Hint: create a circuit with N inputs that always outputs 0.
    10. Show that CIRCUIT-DIFF reduces to CIRCUIT-SAT.
    11. Show that DETERMINANT is in NP: given an N-by-N integer matrix A, is det(A) = 0?

      Solution: certificate is a nonzero vector x such that Ax = 0.

    12. Show that FULL-RANK is in NP: given an N-by-N integer matrix A, is det(A) ≠ 0?

      Solution: certificate is an N-by-N inverse matrix B such that AB = I.

    13. Search problems vs. decision problems. We can formulate a search problem using a corresponding decision problem. For example, the problem of finding the prime factorization of an integer N can be formulated using the decision problem: given two integers N and L, does N have a nontrivial factor strictly less than L? The search problem is solvable in polynomial time if and only if the corresponding decision problem is. To see why, note that we can efficiently find the smallest factor p of N by using different values of L along with binary search, as in the sketch below. Once we have the factor p, we can repeat the process on N/p.
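
      Here is a minimal Java sketch of the binary search step (the class and method names are our own). The method hasFactorLessThan is a hypothetical oracle for the decision problem; nobody knows how to implement it in polynomial time.

        import java.math.BigInteger;

        public class FactorSearch {
            // Hypothetical polynomial-time oracle for the decision problem:
            // does n have a nontrivial factor strictly less than bound?
            static boolean hasFactorLessThan(BigInteger n, BigInteger bound) {
                throw new UnsupportedOperationException("hypothetical oracle");
            }

            // Finds the smallest nontrivial factor of a composite n,
            // using O(log n) oracle calls via binary search.
            static BigInteger smallestFactor(BigInteger n) {
                BigInteger lo = BigInteger.valueOf(2), hi = n.subtract(BigInteger.ONE);
                while (lo.compareTo(hi) < 0) {
                    BigInteger mid = lo.add(hi).shiftRight(1);
                    if (hasFactorLessThan(n, mid.add(BigInteger.ONE)))
                        hi = mid;                      // some factor is <= mid
                    else
                        lo = mid.add(BigInteger.ONE);  // all factors are > mid
                }
                return lo;
            }
        }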

      Usually we can show that the search problem and the decision problem are equivalent, up to polynomial factors in running time. Papadimitriou (Example 10.8) gives an interesting counterexample to the rule. Given N positive integers whose sum is less than 2^N - 1, find two subsets whose sums are equal. For example, the 10 numbers below sum to 1014 < 1023.

      23 47 59 88 91 100 111 133 157 205

      Since there are more subsets of the 10 integers (2^10 = 1024) than possible subset sums (the values 0 through 1014), the pigeonhole principle guarantees that two different subsets have the same sum. But nobody knows a polynomial-time algorithm for finding such a pair. On the other hand, the natural decision problem (are there two subsets that sum to the same value?) is trivially solvable in constant time: the answer is always yes.

    14. Pratt's primality certificate. Show that PRIMES is in NP. Use Lehmer's theorem (the converse of Fermat's little theorem), which asserts that an integer p > 1 is prime if and only if there exists an integer x such that x^(p-1) = 1 (mod p) and x^((p-1)/d) ≠ 1 (mod p) for all prime divisors d of p-1. For example, if p = 7919, then the prime factorization of p-1 = 7918 is 2 × 37 × 107. Now x = 7 satisfies 7^7918 = 1 (mod 7919), but 7^(7918/2) ≠ 1 (mod 7919), 7^(7918/37) ≠ 1 (mod 7919), and 7^(7918/107) ≠ 1 (mod 7919). This proves that 7919 is prime (assuming that you recursively certify that 2, 37, and 107 are prime). The arithmetic can be verified with modular exponentiation, as in the sketch below.
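
      A minimal Java sketch that checks this particular certificate (the class name is our own); a full Pratt certificate would recursively include certificates for 2, 37, and 107 as well.

        import java.math.BigInteger;

        public class LehmerCheck {
            public static void main(String[] args) {
                BigInteger p = BigInteger.valueOf(7919);
                BigInteger x = BigInteger.valueOf(7);
                BigInteger pm1 = p.subtract(BigInteger.ONE);  // 7918 = 2 * 37 * 107
                System.out.println(x.modPow(pm1, p));         // 1
                for (long d : new long[] { 2, 37, 107 }) {
                    BigInteger e = pm1.divide(BigInteger.valueOf(d));
                    System.out.println(x.modPow(e, p));       // each value != 1
                }
            }
        }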
    15. Pell's equation. Find all positive integer solutions to Pell's equation: x^2 - 92y^2 = 1. Solution: (1151, 120), (2649601, 276240), etc. There are infinitely many solutions, but each successive one is about 2300 times the previous one.
    16. Pell's equation. In 1657, Pierre de Fermat challenged his colleagues with the following problem: given a positive integer c, find a positive integer y such that cy^2 + 1 is a perfect square. Fermat used c = 109. It turns out the smallest solution to x^2 - 109y^2 = 1 is (x, y) = (158,070,671,986,249, 15,140,424,455,100). Write a program Pell.java that reads in an integer c and finds the smallest solution to Pell's equation x^2 - cy^2 = 1. Try c = 61; the smallest solution is (1,766,319,049, 226,153,980). For c = 313, the smallest solution is (32,188,120,829,134,849, 1,819,380,158,564,160). The problem is provably unsolvable in a polynomial number of steps (as a function of the number of bits in the input c) because the output may require exponentially many bits! A brute-force sketch appears below.
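
      A minimal brute-force sketch of Pell.java (our own code), under the assumption that the answer fits in a long. This works for c = 61 in a few seconds but is hopeless for c = 109 or c = 313, which call for the continued-fraction method; the exponential output size is exactly why no polynomial-time algorithm can exist.

        public class Pell {
            // Floor of the square root of a nonnegative long, with a
            // correction step for floating-point rounding error.
            static long isqrt(long n) {
                long r = (long) Math.sqrt((double) n);
                while (r > 0 && r * r > n) r--;
                while ((r + 1) * (r + 1) <= n) r++;
                return r;
            }

            public static void main(String[] args) {
                long c = Long.parseLong(args[0]);
                for (long y = 1; ; y++) {      // try y = 1, 2, 3, ...
                    long t = c * y * y + 1;    // overflows for large c or y
                    long x = isqrt(t);
                    if (x * x == t) {          // t is a perfect square
                        System.out.println("x = " + x + ", y = " + y);
                        return;
                    }
                }
            }
        }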
    17. 3-COLOR reduces to 4-COLOR. Show that 3-COLOR polynomial-time reduces to 4-COLOR. Hint: given an instance G of 3-COLOR, create an instance G' of 4-COLOR by adding a special vertex x to G and connecting it to all of the vertices in G.
    18. 3-SAT is self-reducible. Show that 3-SAT is self-reducible. That is, given an oracle that answers whether or not any 3-SAT formula is satisfiable, design an algorithm that can find a satisfying assignment to a 3-SAT formula (assuming it is satisfiable). Your algorithm should run in polynomial time plus a polynomial number of calls to the oracle.
    19. 3-COLOR is self-reducible. Show that 3-COLOR is self-reducible. That is, given an oracle that answers whether or not any graph G is 3-colorable, design an algorithm that can 3-color a graph (assuming it is 3-colorable). Your algorithm should run in polynomial time plus a polynomial number of calls to the oracle.