What is dynamic programming?


33

Apologies in advance if this question sounds silly...

As far as I know, building an algorithm with dynamic programming works like this:

  1. Express the problem as a recurrence relation.
  2. Implement the recurrence relation, using either memoization or a bottom-up approach.

As far as I know, that is everything there is to say about dynamic programming. That is, dynamic programming does not give you tools/rules/methods/theorems for expressing a problem as a recurrence relation, or for turning a recurrence relation into code.

So what is special about dynamic programming? Beyond being a vague way of approaching a certain kind of problem, what does it give you?


11
Historical factoid (this comment won't help, but Bellman is actually a good lead to follow if you care about the theory behind dynamic programming): he needed something flashy that could not be used in a pejorative way, because back then purely theoretical work did not fly with his employer.
G. Bach

3
As far as I know, it is exactly those two points you mentioned. It becomes special when overlapping subproblems let you avoid an exponential blowup. That's all. Oh, and by the way, my professor prefers "algorithmic paradigm" over "vague way".
Hendrik

"Dynamic programming" seems to be mostly a buzzword (one that has since gone out of fashion). That doesn't mean it's useless, of course.
user253751

3
Not worth an answer, but dynamic programming is definitely "a thing you use when you are trying to solve a problem recursively, but you keep revisiting the same subproblems over and over".
hobbs

@hobbs: Accurate, but the skill lies in finding that time-wasting way in the first place ;)
j_random_hacker

Answers:


27

Dynamic programming gives you a way to think about algorithm design. This is often very helpful.

Memoization and the bottom-up method give you rules/methods for translating recurrence relations into code. Memoization is a relatively simple idea, but the best ideas often are!
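To make that concrete, here is a minimal sketch (in Python; the lattice-path recurrence is an illustrative example of mine, not something from the question) of how memoization turns a recurrence relation into code almost mechanically:

```python
from functools import lru_cache

# Illustrative recurrence: the number of monotone lattice paths from
# (0, 0) to (m, n) satisfies paths(m, n) = paths(m-1, n) + paths(m, n-1),
# with exactly one path left once either coordinate hits 0.
@lru_cache(maxsize=None)  # memoization: each (m, n) is computed once
def paths(m: int, n: int) -> int:
    if m == 0 or n == 0:
        return 1  # only one way left: walk straight along the axis
    return paths(m - 1, n) + paths(m, n - 1)

print(paths(3, 3))  # 20
```

The recurrence is transcribed essentially line for line; the decorator supplies the memoization, which is why this translation step feels so mechanical.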

Dynamic programming gives you a structured way to think about the running time of your algorithm. The running time is basically determined by two numbers: the number of subproblems you have to solve, and the time it takes to solve each subproblem. This provides a convenient, easy way to think about the algorithm design problem. When you have a candidate recurrence relation, you can look at it and work out the running time very quickly (for instance, you can often tell the number of subproblems very quickly, and this is a lower bound on the running time; if there are exponentially many subproblems you have to solve, then the recurrence probably won't be a good approach). This also helps you rule out candidate subproblem decompositions. For instance, if we have a string S[1..n], defining a subproblem by a prefix S[1..i] or a suffix S[j..n] or a substring S[i..j] might be reasonable (the number of subproblems is polynomial in n), but defining a subproblem by a subsequence of S is not likely to be a good approach (the number of subproblems is exponential in n). This lets you prune the "search space" of possible recurrences.

Dynamic programming gives you a structured approach to look for candidate recurrence relations. Empirically, this approach is often effective. In particular, there are some heuristics/common patterns you can recognize for common ways to define subproblems, depending on the type of the input. For instance:

  • If the input is a positive integer n, one candidate way to define a subproblem is by replacing n with a smaller integer n' (s.t. 0 ≤ n' ≤ n).

  • If the input is a string S[1..n], one candidate way to define a subproblem is to: replace S[1..n] with a prefix S[1..i]; replace S[1..n] with a suffix S[j..n]; or replace S[1..n] with a substring S[i..j]. (Here the subproblem is determined by the choice of i, j.)

  • If the input is a list, do the same as you'd do for a string.

  • If the input is a tree T, one candidate way to define a subproblem is to replace T with any subtree of T (i.e., pick a node x and replace T with the subtree rooted at x; the subproblem is determined by the choice of x).

  • If the input is a pair (x, y), one candidate way to define a subproblem is to replace (x, y) with (x', y'), where x' is a subproblem for x and y' is a subproblem for y. (You can also consider subproblems of the form (x, y') or (x', y).)

And so on. This gives you a very useful heuristic: just by looking at the type signature of the method, you can come up with a list of candidate ways to define subproblems. In other words, just by looking at the problem statement -- looking only at the types of the inputs -- you can come up with a handful of candidate ways to define a subproblem.

This is often very helpful. It doesn't tell you what the recurrence relation is, but when you have a particular choice for how to define the subproblem, often it's not too hard to work out a corresponding recurrence relation. So, it often turns design of a dynamic programming algorithm into a structured experience. You write down on scrap paper a list of candidate ways to define subproblems (using the heuristic above). Then, for each candidate, you try to write down a recurrence relation, and evaluate its running time by counting the number of subproblems and the time spent per subproblem. After trying each candidate, you keep the best one that you were able to find. Providing some structure to the algorithm design process is a major help, as otherwise algorithm design can be intimidating (there's such a huge space of possible approaches, without some structure it can be unclear how to even get started).
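As a sketch of that workflow (my example, not part of the original answer): applying the prefix heuristic above to the classic longest-increasing-subsequence problem yields the candidate subproblem "the longest increasing subsequence of A[1..i] that ends at index i", i.e., O(n) subproblems with O(n) work each:

```python
def lis_length(a: list[int]) -> int:
    """Length of the longest strictly increasing subsequence of a.

    Subproblem (prefix heuristic): best[i] = length of the longest
    increasing subsequence of a[:i+1] that ends exactly at index i.
    n subproblems, O(n) time each => O(n^2) overall.
    """
    n = len(a)
    best = [1] * n  # every single element is an increasing subsequence
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:  # a[i] can extend a subsequence ending at j
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4  (e.g. 1, 4, 5, 9)
```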


So you confirm that dynamic programming does not provide concrete "procedures" to follow. It's just "a way to think", as you said. Note that I'm not arguing that DP is useless (on the contrary!), I'm just trying to understand whether there is something I am missing or whether I should just practice more.
hey hey

@heyhey, well, yes... and no. See my revised answer for more elaboration. It's not a silver bullet, but it does provide some semi-concrete procedures that are often helpful (not guaranteed to work, but often do prove helpful).
D.W.

Many thanks! By practicing I am getting more and more familiar with some of those "semi-concrete procedures" you are describing.
hey hey

"if there are exponentially many subproblems you have to solve, then the recurrence probably won't be a good approach". For many problems there is no known polynomial time algorithm. Why should this be a criterion for using DP?
Chiel ten Brinke

@Chiel, it's not a criterion for using DP. If you have a problem where you would be happy with an exponential-time algorithm, then you can ignore that particular parenthetical remark. It's just an example to illustrate the general point I was making -- not something you should take too seriously or interpret as a hard-and-fast rule.
D.W.

9

Your understanding of dynamic programming is correct (afaik), and your question is justified.

I think the additional design space we get from the kind of recurrences we call "dynamic programming" can best be seen in comparison to other schemata of recursive approaches.

Let's pretend our inputs are arrays A[1..n] for the sake of highlighting the concepts.

  1. Inductive Approach

    Here the idea is to make your problem smaller, solve the smaller version and derive a solution for the original one. Schematically,

    f(A) = g(f(A[1..n-c]), A)

    with g the function/algorithm that translates the solution.

    Example: Finding superstars in linear time

  2. Divide & Conquer

    Partition the input into several smaller parts, solve the problem for each and combine. Schematically (for two parts),

    f(A) = g(f(A[1..c]), f(A[c+1..n]), A).

    Examples: Merge-/Quicksort, Shortest pairwise distance in the plane

  3. Dynamic Programming

    Consider all ways of partitioning the problem into smaller problems and pick the best. Schematically (for two parts),

    f(A) = best { g(f(A[1..c]), f(A[c+1..n])) | 1 ≤ c ≤ n-1 }.

    Examples: Edit distance, Change-making problem (a code sketch of the latter follows at the end of this answer)

    Important side note: dynamic programming is not brute force! The application of best in every step reduces the search space considerably.

In a sense, you know less and less statically going from top to bottom, and have to make more and more decisions dynamically.

The lesson from learning about dynamic programming is that it is okay to try all possible partitionings (well, it's required for correctness) because it can still be efficient using memoization.
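To make the third schema concrete, here is a minimal sketch of the change-making problem mentioned above (my code; the coin set is an assumption chosen for the demo): every way of splitting off one coin is tried, and best (here: min) keeps only the cheapest subresult.

```python
from functools import lru_cache

COINS = (1, 4, 5)  # assumed denominations, chosen to defeat the greedy strategy

@lru_cache(maxsize=None)
def min_coins(amount: int) -> float:
    """Fewest coins summing to `amount` (inf if unreachable).

    Schema 3: consider every way of partitioning off one coin,
    recurse on the remainder, and keep the best result.
    """
    if amount == 0:
        return 0
    options = [min_coins(amount - c) for c in COINS if c <= amount]
    return 1 + min(options) if options else float("inf")

print(min_coins(8))  # 2 (4 + 4); greedy 5+1+1+1 would use 4 coins
```

Without the @lru_cache memoization this would explore exponentially many splits; with it, each amount is solved once -- which is exactly the point of the paragraph above.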


"Pruned Dynamic Programming" (when it applies) proves that trying all possibilities is NOT required for correctness.
Ben Voigt

@BenVoigt Of course. I remained deliberately vague about what "all ways to partition" means; you want to rule out as many as possible, of course! (However, even if you try all ways of partitioning you don't get brute force since you only ever investigate combinations of optimal solutions to subproblems, whereas brute-force would investigate all combinations of all solutions.)
Raphael


5

Dynamic Programming allows you to trade memory for computation time. Consider the classic example, Fibonacci.

Fibonacci is defined by the recurrence Fib(n) = Fib(n-1) + Fib(n-2). If you solve it using this recursion directly, you end up doing O(2^n) calls to Fib(), since the recursion tree is a binary tree of height n.

Instead, you want to calculate Fib(2), then use this to find Fib(3), use that to find Fib(4), etc. This only takes O(n) time.
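A minimal sketch of that bottom-up computation (my code; Python):

```python
def fib(n: int) -> int:
    """Bottom-up Fibonacci: O(n) time, trading O(n) memory
    for the O(2^n) calls of the naive recursion."""
    table = [0, 1]  # Fib(0), Fib(1)
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])  # reuse stored results
    return table[n]

print(fib(10))  # 55
```

(Only the last two entries of the table are ever read, so the memory can in fact be cut to O(1) -- the "frontier" mentioned below.)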

DP also provides us with basic techniques for translating a recurrence relation into a bottom-up solution, but these are relatively straightforward (and generally involve using an m-dimensional matrix, or a frontier of such a matrix, where m is the number of parameters in the recurrence relation). These are well explained in any text about DP.


1
You talk only about the memoization part, which misses the point of the question.
Raphael

1
"Dynamic Programming allows you to trade memory for computation time" is not something I heard when doing undergrad, and it's a great way to look at this subject. This is an intuitive answer with a succinct example.
trueshot

@trueshot: Except that sometimes dynamic programming (and particularly, "Pruned Dynamic Programming") is able to reduce both time and space requirements.
Ben Voigt

@Ben I didn't say it was a one-to-one trade. You can prune a recurrence tree as well. I posit that I did answer the question, which was, "What does DP get us?" It gets us faster algorithms by trading space for time. I agree that the accepted answer is more thorough, but this is valid as well.
Kittsil

2

Here is another slightly different way of phrasing what dynamic programming gives you. Dynamic programming collapses an exponential number of candidate solutions into a polynomial number of equivalence classes, such that the candidate solutions in each class are indistinguishable in some sense.

Let me take as an example the problem of finding the number of increasing subsequences of length k in an array A of length n. It is useful to partition the set of all subsequences into equivalence classes such that two subsequences belong to the same class if and only if they have the same length and end at the same index. Each of the 2^n possible subsequences belongs to exactly one of the O(n^2) equivalence classes. This partitioning preserves enough information that we can define a recurrence relation for the sizes of the classes. If f(i, ℓ) gives the number of increasing subsequences which end at index i and have length ℓ, then we have:

f(i, ℓ) = Σ_{j < i such that A[j] < A[i]} f(j, ℓ-1)
f(i, 1) = 1 for all i = 1..n

This recurrence solves the problem in time O(n^2 k).
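A minimal sketch of this recurrence (my code, assuming k ≥ 1; Python):

```python
def count_increasing_subsequences(a: list[int], k: int) -> int:
    """Number of strictly increasing subsequences of a of length exactly k.

    f[i][l] = number of increasing subsequences of length l that end
    at index i -- one counter per equivalence class. O(n^2 k) time.
    Assumes k >= 1.
    """
    n = len(a)
    f = [[0] * (k + 1) for _ in range(n)]  # lengths 1..k (index 0 unused)
    for i in range(n):
        f[i][1] = 1  # base case: the one-element subsequence ending at i
        for l in range(2, k + 1):
            f[i][l] = sum(f[j][l - 1] for j in range(i) if a[j] < a[i])
    return sum(f[i][k] for i in range(n))

print(count_increasing_subsequences([1, 3, 2, 4], 3))  # 2: (1,3,4) and (1,2,4)
```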

Licensed under cc by-sa 3.0 with attribution required.