# C9567. Maximum Conversions to One

    ID: 53674 | Type: Default | Time limit: 1000 ms | Memory limit: 256 MiB

You are given an array of integers. You can perform a sequence of operations to convert some of these elements into 1. The rules of the conversion are as follows:

If the array already contains at least one 1, you may convert any other element to 1, with each conversion "using up" one opportunity. In other words, if the number of elements that are not 1 is nonOne and the number of 1's is oneCount, you can ultimately convert at most

\( \mathit{oneCount} + \mathit{nonOne} - 1 \)

elements to 1. However, if the array does not initially contain a 1, then you must leave one element unchanged to enable any conversions, so the maximum number of 1's you can obtain is n - 1, where n is the size of the array.

Your task is to compute the maximum number of elements that can become 1 after performing any number of operations.

Note: When there is at least one 1 in the array, you can convert all but one of the remaining elements, resulting in a total of \( \mathit{oneCount} + \mathit{nonOne} - 1 \) ones. When there are no 1's, you may convert all but one element (the element left unchanged enables the first conversion), which results in \( n - 1 \) ones. This rule applies even to an array of size 1, where the answer is 0.
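
Since \( \mathit{oneCount} + \mathit{nonOne} = n \), both cases of the rule evaluate to \( n - 1 \). A minimal C++ sketch along these lines (assuming the input format described below; the variable names oneCount, nonOne, and ans are illustrative) might look as follows:

```cpp
#include <cstdio>

int main() {
    int n;
    if (scanf("%d", &n) != 1) return 0;

    int oneCount = 0;               // elements already equal to 1
    for (int i = 0; i < n; ++i) {
        long long a;
        scanf("%lld", &a);
        if (a == 1) ++oneCount;
    }

    int nonOne = n - oneCount;      // elements that are not 1
    // Both branches of the rule give n - 1:
    //   a 1 is present:  oneCount + nonOne - 1
    //   no 1 present:    n - 1
    int ans = (oneCount > 0) ? (oneCount + nonOne - 1) : (n - 1);
    printf("%d\n", ans);
    return 0;
}
```

The explicit case split is kept only to mirror the statement; collapsing it to `n - 1` would behave identically.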

inputFormat

The first line of input contains a single integer \( n \) (\( 1 \le n \le 10^5 \)), the number of elements in the array. The second line contains \( n \) space-separated integers \( a_1, a_2, \ldots, a_n \) (\( 1 \le a_i \le 10^9 \)).

For example:

5
2 3 4 5 6

outputFormat

Output a single integer — the maximum number of elements that can be converted to 1.

For example:

4

Here the array contains no 1's, so the answer is \( n - 1 = 5 - 1 = 4 \).
## sample

Input:
5
2 3 4 5 6

Output:
4