- 40.5%

https://leetcode.com/problems/single-number-ii/?tab=Description

Given an array of integers, every element appears three times except for one, which appears exactly once. Find that single one.

Note:

Your algorithm should have a linear runtime complexity. Could you implement it without using extra memory?

Method 1:

Count the 1s in each bit position across all numbers, take each count modulo 3, and the surviving bits form the answer.
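A minimal Java sketch of this idea (class name and structure are mine):

```java
public class Method1 {
    // For each of the 32 bit positions, count the 1s across all numbers.
    // Bits whose count is not a multiple of 3 belong to the single number.
    public static int singleNumber(int[] nums) {
        int ans = 0;
        for (int i = 0; i < 32; i++) {
            int sum = 0;
            for (int num : nums) {
                sum += (num >>> i) & 1;  // extract bit i
            }
            if (sum % 3 != 0) {
                ans |= 1 << i;           // this bit belongs to the single number
            }
        }
        return ans;
    }
}
```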

Method 2:

https://discuss.leetcode.com/topic/2031/challenge-me-thx

Challenge me, thx

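The code from that thread follows this two-variable shape (a reconstruction from the discussion, not guaranteed verbatim; class name is mine):

```java
public class Challenge {
    public static int singleNumber(int[] A) {
        int ones = 0, twos = 0;
        for (int i = 0; i < A.length; i++) {
            ones = (ones ^ A[i]) & ~twos;  // bits seen 1 time (mod 3)
            twos = (twos ^ A[i]) & ~ones;  // bits seen 2 times (mod 3)
        }
        return ones;
    }
}
```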

Detailed explanation and generalization of the bitwise operation method for single numbers

Statement of our problem: “Given an array of integers, every element appears k (k > 1) times except for one, which appears p times (p >= 1, p % k != 0). Find that single one.”

As others pointed out, in order to apply the bitwise operations, we should rethink how integers are represented in computers – by bits. To start, let’s consider only one bit for now. Suppose we have an array of 1-bit numbers (which can only be 0 or 1), we’d like to count the number of 1’s in the array such that whenever the counted number of 1 reaches a certain value, say k, the count returns to zero and starts over (In case you are curious, this k will be the same as the one in the problem statement above). To keep track of how many 1’s we have encountered so far, we need a counter. Suppose the counter has m bits in binary form: xm, …, x1 (from most significant bit to least significant bit). We can conclude at least the following four properties of the counter:

- There is an initial state of the counter, which for simplicity is zero;
- For each input from the array, if we hit a 0, the counter should remain unchanged;
- For each input from the array, if we hit a 1, the counter should increase by one;
- In order to cover k counts, we require 2^m >= k, which implies m >= log2(k).

Here is the key part: how each bit in the counter (x1 to xm) changes as we are scanning the array. Note we are prompted to use bitwise operations. In order to satisfy the second property, recall what bitwise operations will not change the operand if the other operand is 0? Yes, you got it: x = x | 0 and x = x ^ 0.

Okay, we have an expression now: x = x | i or x = x ^ i, where i is the scanned element from the array. Which one is better? We don’t know yet. So, let’s just do the actual counting:

At the beginning, all bits of the counter is initialized to zero, i.e., xm = 0, …, x1 = 0. Since we are gonna choose bitwise operations that guarantee all bits of the counter remain unchanged if we hit 0’s, the counter will be 0 until we hit the first 1 in the array. After we hit the first 1, we got: xm = 0, …,x2 = 0, x1 = 1. Let’s continue until we hit the second 1, after which we have: xm = 0, …, x2 = 1, x1 = 0. Note that x1 changed from 1 to 0. For x1 = x1 | i, after the second count, x1 will still be 1. So it’s clear we should use x1 = x1 ^ i. What about x2, …, xm? The idea is to find the condition under which x2, …, xm will change their values. Take x2 as an example. If we hit a 1 and need to change the value of x2, what must be the value of x1 right before we do the change? The answer is: x1 must be 1 otherwise we shouldn’t change x2 because changing x1 from 0 to 1 will do the job. So x2 will change value only if x1 and i are both 1, or mathematically, x2 = x2 ^ (x1 & i). Similarly xm will change value only when xm-1, …, x1 and i are all 1: xm = xm ^ (xm-1 & … & x1 & i). Bingo, we’ve found the bitwise operations!
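As a sanity check on the derivation above, here is a tiny Java simulation of the two-bit counter (no mask yet, so it counts 1s modulo 2^2 = 4; names are mine):

```java
public class TwoBitCounter {
    // Feed in a stream of 1-bit inputs; (x2 x1) is the count of 1s so far, mod 4.
    public static int[] count(int[] bits) {
        int x1 = 0, x2 = 0;
        for (int i : bits) {
            x2 ^= x1 & i;  // x2 flips only when x1 and i are both 1
            x1 ^= i;       // x1 flips on every 1
        }
        return new int[]{x2, x1};
    }
}
```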

However, you may notice that the bitwise operations found above will count from 0 until 2^m - 1, instead of k. If k < 2^m - 1, we need some “cutting” mechanism to reinitialize the counter to 0 when the count reaches k. To this end, we apply bitwise AND to xm,…, x1 with some variable called mask, i.e., xm = xm & mask, …, x1 = x1 & mask. If we can make sure that mask will be 0 only when the count reaches k and be 1 for all other count cases, then we are done. How do we achieve that? Try to think what distinguishes the case with k count from all other count cases. Yes, it’s the count of 1’s! For each count, we have unique values for each bit of the counter, which can be regarded as its state. If we write k in its binary form: km,…, k1. we can construct mask as follows:

    mask = ~(y1 & y2 & ... & ym), where yj = xj if kj = 1, and yj = ~xj if kj = 0 (j = 1 to m).

Let's do some examples:

- k = 3 ('11' in binary): k1 = 1, k2 = 1, so mask = ~(x1 & x2);
- k = 5 ('101' in binary): k1 = 1, k2 = 0, k3 = 1, so mask = ~(x1 & ~x2 & x3).

In summary, our algorithm will go like this:

    for (int i : array) {
        xm ^= (x(m-1) & ... & x1 & i);
        x(m-1) ^= (x(m-2) & ... & x1 & i);
        .....
        x1 ^= i;

        mask = ~(y1 & y2 & ... & ym), where yj = xj if kj = 1, and yj = ~xj if kj = 0;

        xm &= mask;
        .....
        x1 &= mask;
    }

Now it’s time to generalize our results from 1-bit number case to 32-bit integers. One straightforward way would be creating 32 counters for each bit in the integer. You’ve probably already seen this in other posted codes. But if we take advantage of bitwise operations, we may be able to manage all the 32 counters “collectively”. By saying “collectively” we mean using m 32-bit integers instead of 32 m-bit counters, where m is the minimum integer that satisfies m >= logk. The reason is that bitwise operations apply only to each bit so operations on different bits are independent of each other(kind obvious, right?). This allows us to group the corresponding bits of the 32 counters into one 32-bit integer (for schematic steps, see comments below). Since each counter has m bits, we end up with m 32-bit integers. Therefore, in the algorithm developed above, we just need to regard x1 to xm as 32-bit integers instead of 1-bit numbers and we are done. Easy, hum?

The last thing is what value we should return, or equivalently which one of x1 to xm will equal the single element. To get the correct answer, we need to understand what the m 32-bit integers x1 to xm represent. Take x1 as an example. x1 has 32 bits and let’s label them as r (r = 1 to 32), After we are done scanning the input array, the value for the r-th bit of x1 will be determined by the r-th bit of all the elements in the array (more specifically, suppose the total count of 1 for the r-th bit of all the elements in the array is q, q’ = q % k and in its binary form: q’m,…,q’1, then by definition the r-th bit of x1 will be equal to q’1). Now you can ask yourself this question: what does it imply if the r-th bit of x1 is 1?

The answer is to find what can contribute to this 1. Will an element that appears k times contribute? No. Why? Because for an element to contribute, it has to satisfy at least two conditions at the same time: the r-th bit of this element is 1 and the number of appearance of this 1 is not an integer multiple of k. The first condition is trivial. The second comes from the fact that whenever the number of 1 hit is k, the counter will go back to zero, which means the corresponding bit in x1 will be reset to 0. For an element that appears k times, it’s impossible to meet these two conditions simultaneously so it won’t contribute. At last, only the single element which appears p (p % k != 0) times will contribute. If p > k, then the first k * [p/k] ([p/k]denotes the integer part of p/k) single elements won’t contribute either. Then we can always set p’ = p % k and say the single element appears effectively p’ times.

Let’s write p’ in its binary form: p’m, …, p’1. (note that p’ < k, so it will fit into m bits). Here I claim the condition for x1 to equal the single element is p’1 = 1. Quick proof: if the r-th bit of x1 is 1, we can safely say the r-th bit of the single element is also 1. We are left to prove that if the r-th bit of x1 is 0, then the r-th bit of the single element can only be 0. Just suppose in this case the r-th bit of the single element is 1, let’s see what will happen. At the end of the scan, this 1 will be counted p’ times. If we write p’ in its binary form: p’m, …, p’1, then by definition the r-th bit of x1 will equal p’1, which is 1. This contradicts with the presumption that the r-th bit of x1 is 0. Since this is true for all bits in x1, we can conclude x1 will equal the single element if p’1 = 1. Similarly we can show xj will equal the single element if p’j = 1 (j = 1 to m). Now it’s clear what we should return. Just express p’ = p % k in its binary form and return any of the corresponding xj as long as p’j = 1.
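As an illustration of choosing the return value: if k = 3 and the single element appeared p = 2 times, then p' = 2 = '10', so x2 (not x1) holds the answer. A sketch of mine, not from the post:

```java
public class KThreePTwo {
    // k = 3, single element appears p = 2 times: return x2, since p' = '10'.
    public static int singleNumber(int[] nums) {
        int x1 = 0, x2 = 0;
        for (int i : nums) {
            x2 ^= x1 & i;
            x1 ^= i;
            int mask = ~(x1 & x2);  // reset the counter when it reaches 3 = '11'
            x1 &= mask;
            x2 &= mask;
        }
        return x2;
    }
}
```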

In total, the algorithm will run in O(n * logk) time and O(logk) space.

Hope this helps!

Here is a list of few quick examples to show how the algorithm works:

- k = 2, p = 1.

k is 2, then m = 1, we need only one 32-bit integer(x1) as the counter. And 2^m = k so we do not even need a mask!

A complete java program will look like:

    public int singleNumber(int[] A) {
        int x1 = 0;

        for (int i : A) {
            x1 ^= i;
        }

        return x1;
    }

- k = 3, p = 1.

k is 3, then m = 2, we need two 32-bit integers(x2, x1) as the counter. And 2^m > k so we do need a mask. Write k in its binary form: k = ‘11’, then k1 = 1, k2 = 1, so we have mask = ~ (x1 & x2).

A complete java program will look like:

    public int singleNumber(int[] A) {
        int x1 = 0, x2 = 0, mask = 0;

        for (int i : A) {
            x2 ^= x1 & i;
            x1 ^= i;
            mask = ~(x1 & x2);
            x2 &= mask;
            x1 &= mask;
        }

        return x1;  // p = 1, so p' = 1 = '01' and p'1 = 1: return x1
    }

- k = 5, p = 3.

k is 5, then m = 3, we need three 32-bit integers(x3, x2, x1) as the counter. And 2^m > k so we need a mask. Write k in its binary form: k = ‘101’, then k1 = 1, k2 = 0, k3 = 1, so we have mask = ~(x1 & ~x2 & x3).

A complete java program will look like:

    public int singleNumber(int[] A) {
        int x1 = 0, x2 = 0, x3 = 0, mask = 0;

        for (int i : A) {
            x3 ^= x2 & x1 & i;
            x2 ^= x1 & i;
            x1 ^= i;
            mask = ~(x1 & ~x2 & x3);
            x3 &= mask;
            x2 &= mask;
            x1 &= mask;
        }

        return x1;  // p = 3, so p' = 3 = '011': return x1 (or equivalently x2)
    }

You can easily come up with other examples. If you have any questions about the explanation, please let me know. I would appreciate your feedback. Thanks!

https://discuss.leetcode.com/topic/22821/an-general-way-to-handle-all-this-sort-of-questions

A General Way to Handle All These Sorts of Questions

For this kind of question, the key idea is to design a counter that records state. The general problem can be: every number occurs K times except one, which occurs M times. For this question, K = 3, M = 1 (or 2).

So to represent 3 states, we need two bits. Let's say they are a and b, and c is the incoming bit.

Then we can design a table to implement the state transitions.

    current   incoming   next
    a b       c          a b
    0 0       0          0 0
    0 1       0          0 1
    1 0       0          1 0
    0 0       1          0 1
    0 1       1          1 0
    1 0       1          0 0

As in circuit design, we can find out what the next state will be given the incoming bit. (We only need to find the rows where the next-state bit is 1.)

Then, for a to be 1 in the next state, we have:

    current   incoming   next
    a b       c          a b
    1 0       0          1 0
    0 1       1          1 0

and this can be represented by

    a = a&~b&~c + ~a&b&c

We can do the same for b, and we find that

    b = ~a&b&~c + ~a&~b&c

These are the final formulas for a and b, and they are just one possible result set: for different state transition table definitions, we can generate different formulas, and this one may not be the most optimized. As you may see, others' answers have much simpler formulas, and those formulas also correspond to specific state transition tables. (If you like, you can reverse their formulas back into a state transition table, using the same method in reverse.)

For this question we need to find the exceptional one. Since the question doesn't say whether it appears one time or two times, for both states of ab:

    01 => 1
    10 => 1

we should return a|b;

This is the key idea: we can design a counter of any base, and find the element that occurs any number of times different from all the others.

Here is my code, with comments.

    public class Solution {
        public int singleNumber(int[] nums) {
            // a, b are the two state bits of the counter; c is the incoming number
            int a = 0, b = 0;
            for (int c : nums) {
                int ta = (a & ~b & ~c) | (~a & b & c);  // next value of a
                b = (~a & b & ~c) | (~a & ~b & c);      // next value of b
                a = ta;
            }
            // the single number may end in state 01 or 10, so return a|b
            return a | b;
        }
    }

This is a general solution, and it comes from circuit design in digital logic courses.

Java O(n) easy to understand solution, easily extended to any times of occurrence

The usual bit manipulation code is a bit hard to get and replicate. I like to think about the number in 32 bits and just count how many 1s there are in each bit, and sum %= 3 will clear it once it reaches 3. After running through all the numbers for each bit, if we have a 1, then that 1 belongs to the single number; we can simply move it back to its spot by doing ans |= sum << i;

This has complexity of O(32n), which is essentially O(n), and it is very easy to think of and implement. Plus, you get a general solution for any number of occurrences. Say all the other numbers appear 5 times; just do sum %= 5.

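A sketch of the counting approach just described (reconstructed from the description; class name is mine, not necessarily the poster's exact code):

```java
public class CountPerBit {
    public static int singleNumber(int[] nums) {
        int ans = 0;
        for (int i = 0; i < 32; i++) {
            int sum = 0;
            for (int num : nums) {
                sum += (num >> i) & 1;  // count 1s in bit i
            }
            sum %= 3;                   // clears the count once it reaches 3
            ans |= sum << i;            // move the leftover bit back to its spot
        }
        return ans;
    }
}
```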

Accepted code with proper Explanation. Does anyone have a better idea?

The code makes use of 2 variables.

ones - At any point of time, this variable holds XOR of all the elements which have appeared “only” once.

twos - At any point of time, this variable holds XOR of all the elements which have appeared “only” twice.

So if at any point of time,

- A new number appears - It gets XOR’d to the variable “ones”.
- A number gets repeated(appears twice) - It is removed from “ones” and XOR’d to the variable “twos”.
- A number appears for the third time - It gets removed from both “ones” and “twos”.

The final answer we want is the value present in “ones” - because it holds the unique element.

So if we explain how steps 1 to 3 happens in the code, we are done.

Before explaining the above 3 steps, let's look at the last three lines of the code:

    common_bit_mask = ~(ones & twos);
    ones &= common_bit_mask;
    twos &= common_bit_mask;

All it does is, common 1’s between “ones” and “twos” are converted to zero.

For simplicity, in all the below explanations - consider we have got only 4 elements in the array (one unique element and 3 repeated elements - in any order).

**Explanation for step 1**

Lets say a new element(x) appears.

CURRENT SITUATION - Both variables - “ones” and “twos” has not recorded “x”.

Observe the statement “twos |= ones & x”.

Since bit representation of “x” is not present in “ones”, AND condition yields nothing. So “twos” does not get bit representation of “x”.

But in the next step, “ones ^= x” - “ones” ends up adding the bits of “x”. Thus the new element gets recorded in “ones” but not in “twos”.

The last 3 lines of code as explained already, converts common 1’s b/w “ones” and “twos” to zeros.

Since as of now, only “ones” has “x” and not “twos”, the last 3 lines do nothing.

**Explanation for step 2.**

Lets say an element(x) appears twice.

CURRENT SITUATION - “ones” has recorded “x” but not “twos”.

Now due to the statement, “twos |= ones & x” - “twos” ends up getting the bits of “x”.

But due to the statement, “ones ^= x” - “ones” removes “x” from its binary representation.

Again, the last 3 lines of code do nothing.

So ultimately, “twos” ends up getting bits of “x” and “ones” ends up losing bits of “x”.

**Explanation for step 3.**

Lets say an element(x) appears for the third time.

CURRENT SITUATION - “ones” does not have bit representation of “x” but “twos” has.

Though “ones & x” yields nothing, “twos” by itself already has the bit representation of “x”. So after this statement, “twos” still has the bit representation of “x”.

Due to “ones ^= x”, after this step, “ones” also ends up getting the bit representation of “x”.

Now last 3 lines of code removes common 1’s of “ones” and “twos” - which is the bit representation of “x”.

Thus both “ones” and “twos” end up losing the bit representation of “x”.

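Putting the statements discussed above together (a reconstruction from the explanation; class name is mine, not guaranteed verbatim):

```java
public class OnesTwos {
    public static int singleNumber(int[] A) {
        int ones = 0, twos = 0, common_bit_mask;
        for (int x : A) {
            twos |= ones & x;                  // bits moving from "seen once" to "seen twice"
            ones ^= x;                         // toggle bits in "seen once"
            common_bit_mask = ~(ones & twos);  // bits seen three times appear in both...
            ones &= common_bit_mask;           // ...and are cleared from "ones"
            twos &= common_bit_mask;           // ...and from "twos"
        }
        return ones;
    }
}
```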

https://discuss.leetcode.com/topic/23584/a-general-c-solution-for-these-type-problems

A general C++ solution for this type of problem

There are so many brilliant solutions for this problem that used “| & ^ ~”, and I have learned a lot from them. Here is a general solution for those not familiar with “| & ^ ~”.

Q: Most elements appeared k times, except one. Find this “one”.

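Since the post's goal is a solution without “| & ^ ~”, a plain counting version is one way to do it (a sketch of mine, not the original code; note it uses O(n) extra memory, unlike the bitwise solutions):

```cpp
#include <unordered_map>
#include <vector>
using namespace std;

// Count occurrences, then return the element whose count isn't a multiple of k (k = 3 here).
int singleNumber(vector<int>& s) {
    unordered_map<int, int> counts;
    for (int x : s) ++counts[x];
    for (auto& p : counts)
        if (p.second % 3 != 0) return p.first;
    return 0;  // not reached for valid input
}
```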

My own explanation of bit manipulation method, might be easier to understand

Consider the following fact:

Write all numbers in binary form; then any bit 1 that appeared 3*n times (n is an integer) can only be present in numbers that appeared 3 times.

e.g. 0010 0010 0010 1011 1011 1011 1000 (assuming 4-bit integers)

2(0010) and 11(1011) appeared 3 times, and digit counts are:

    Digits:  3  2  1  0
    Counts:  4  0  6  3

Counts on digits 2, 1, 0 are all multiples of 3; the only digit index with Counts % 3 != 0 is 3.

Therefore, to find the number that appeared only 1 or 2 times, we only need to extract all bits with Counts % 3 != 0.

Now consider how we could do this by bit manipulation

since counts % 3 has only 3 states: 0(00),1(01),2(10)

We could use a TWO-BIT COUNTER (Two, One) to represent Counts % 3. Now we can do a little research on state transitions: for each bit, let B be the input bit; we can enumerate all possible state transitions, where Two+, One+ is the new state of Two, One. (Here we need some knowledge from Digital Logic Design.)

    Two One B   Two+ One+
    0   0   0   0    0
    0   1   0   0    1
    1   0   0   1    0
    0   0   1   0    1
    0   1   1   1    0
    1   0   1   0    0

We could then draw the Karnaugh map to analyze the logic (https://en.wikipedia.org/wiki/Karnaugh_map), and then we get:

    One+ = (One ^ B) & ~Two
    Two+ = (Two ^ B) & ~One+

Now for int32, we need only two int32 variables to represent Two and One for all 32 bits at once, and we update Two and One using the rules derived above.

Code is here (C++):

    class Solution {
    public:
        int singleNumber(vector<int>& nums) {
            int One = 0, Two = 0;
            for (int B : nums) {
                One = (One ^ B) & ~Two;  // One+ = (One ^ B) & ~Two
                Two = (Two ^ B) & ~One;  // Two+ = (Two ^ B) & ~One+
            }
            return One;  // the single number appears once, so it ends up in One
        }
    };

https://discuss.leetcode.com/topic/17629/the-simplest-solution-ever-with-clear-explanation

The simplest solution ever with clear explanation

The key to solve this problem is the count of 1s of each bit of all numbers.

Take one bit number for example: nums = [1, 1, 1, 0, 0, 0, …, x] . All numbers are 0 or 1.

We know that every number appears three times except for just one number. So, if the count of 1s in nums is 0, 3, 6, …, 3*n, then the single number is 0. And if the count of 1s in nums is 1, 4, 7, …, 3*n + 1, then the single number is 1.

So, for an array “nums” that contains only 0 or 1, the code to find the single number is:

    count = 0
    for num in nums:
        count += num
    single_number = count % 3

To make “count” less than 3, mod “count” with 3 in every loop.
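Folding the mod into the loop, the one-bit version looks like this (function name is mine):

```python
def single_bit(nums):
    """Find the single 1-bit number when every other 1-bit number appears three times."""
    count = 0
    for num in nums:
        count = (count + num) % 3  # count stays in {0, 1, 2}
    return count
```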

Below is the procedure for finding the single number in [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]:

    Table 1:
    num:    1  0  1  1  0  1  0  1  0  1  0  1  0
    count:  1  1  2  0  0  1  1  2  2  0  0  1  1

So the single number is 1.

We can write the calculate table for expression “count’ = (count + num) % 3”:

    Table 2:
    count  num  count'
    0      0    0
    1      0    1
    2      0    2
    0      1    1
    1      1    2
    2      1    0

To extend this algorithm to 32-bit numbers, we need to rewrite this code using bit operation expressions.

And the key is rewriting the expression “ count’ = (count + num) % 3 “ to bit operation expressions.

Write the binary forms of “count” and “count’” in “Table 2”, and split their bits into two columns:

    Table 3:
    b1 b0   num   b1' b0'
    0  0    0     0   0
    0  1    0     0   1
    1  0    0     1   0
    0  0    1     0   1
    0  1    1     1   0
    1  0    1     0   0

Here comes the hardest part of this solution.

“Table 3” is a truth table; we need to use it to find the formulas to calculate “b0’” and “b1’”:

1 | b0' = f(b1, b0, num) |

With observations, guesses, experiments, and even some luck, I finally got two simple and elegant formulas:

    b0' = (b0 ^ num) & ~b1
    b1' = (b1 ^ num) & ~b0'

The AC code:

    class Solution:
        def singleNumber(self, nums):
            b1, b0 = 0, 0
            for num in nums:
                b0 = (b0 ^ num) & ~b1
                b1 = (b1 ^ num) & ~b0
            return b0