Number of 1 Bits
Calculating the Hamming weight of a number means counting the number of ‘1’ bits in its binary representation. You can calculate it using a simple loop and bitwise operations.
Here's Python code that takes an integer n as input and returns the number of '1' bits in its binary representation:
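A version consistent with the explanation that follows (assuming the usual LeetCode-style class wrapper):

```python
class Solution:
    def hammingWeight(self, n: int) -> int:
        count = 0
        for _ in range(32):      # the input fits in 32 bits
            count += n & 1       # 1 if the last bit is set, else 0
            n >>= 1              # drop the last bit
        return count
```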


Explanation:
Bitwise AND with 1: Using (n & 1), we check whether the last bit of n is 1. If the last bit is 1, the result of (n & 1) will be 1; otherwise, it will be 0.
Right Shift: We then shift the bits of n one step to the right using the right shift operator >>. This is equivalent to integer division by 2, and it discards the last bit.
Repeat 32 times: Since the input is a 32-bit integer, we repeat this process 32 times.
The final count is the number of '1' bits in the binary representation of n.
Example 32-bit inputs:
0000 0000 0000 0000 0000 0000 0000 1011 (decimal 11, three '1' bits)
0000 0000 0000 0000 0000 0000 0000 1000 (decimal 8, one '1' bit)
Loop through every bit of the input number n.
Time: O(1) (always 32 iterations) Space: O(1)




Problem Classification
Binary Number Operations: The problem involves performing operations on the binary representation of a number.
Bit Manipulation: It requires understanding and manipulating the bits of a number.
Counting: The task requires counting the number of set bits (1s) in a number’s binary representation.
Language Agnostic Coding Drills
Breaking down this problem, we can identify several key areas that can be practiced separately:
Understanding Binary Representation: Understanding that numbers are represented as a sequence of bits in a computer.
Bitwise AND Operation: Learn how to use the bitwise AND operation to check if a particular bit in a binary number is set (1) or not set (0).
Bit Shifting Operation: Get familiar with the bit shifting operation. Here you need to understand the right shift operation (>>). The right shift operation moves bits to the right, discarding the rightmost bit and shifting in a 0 on the left (or, for arithmetic shifts on signed types, a copy of the sign bit). For example, the binary number 1011 (decimal 11) right shifted by one is 101 (decimal 5).
Looping over Bits: Understand how to create a loop that iterates over each bit in a binary number. In this case, you’ll continue shifting bits to the right until there are no more bits left (i.e., until the number is zero).
Counting Set Bits: This involves applying all the previous drills together to count the number of set bits (1s) in the binary representation of a number.
Problem-solving approach:
Given a number, the problem asks to count the number of ‘1’s in its binary representation. To solve this problem:
1. Initialize a count to keep track of the number of '1's.
2. Create a while loop that continues as long as the number is not zero.
3. In each iteration, perform a bitwise AND operation with 1. If the result is 1, it means the rightmost bit in the binary representation of the number is 1, so increment the count.
4. Right shift the number by 1. This moves all bits one position to the right, essentially dropping the rightmost bit and introducing a 0 on the left.
5. Repeat steps 3 and 4 until the number becomes 0.
6. Return the count, which now represents the number of '1's in the binary representation of the input number.
Targeted Drills in Python
1. Understanding Binary Representation:
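A minimal sketch for this drill:

```python
# Inspect the binary representation of a few numbers.
for n in [5, 11, 128]:
    print(n, "->", bin(n))  # e.g. bin(11) == '0b1011'
```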


2. Bitwise AND Operation:
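A minimal sketch for this drill:

```python
# n & 1 isolates the least significant bit.
n = 11               # binary 1011 -> last bit is 1
print(n & 1)
m = 10               # binary 1010 -> last bit is 0
print(m & 1)
```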


3. Bit Shifting Operation:
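A minimal sketch for this drill:

```python
# Right shifting drops the rightmost bit (integer division by 2).
n = 11               # binary 1011
print(n >> 1)        # binary 101 -> 5
print(n >> 2)        # binary 10  -> 2
```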


4. Looping over Bits:
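A minimal sketch for this drill:

```python
# Visit every bit of n, least significant first, until none remain.
n = 11               # binary 1011
while n:
    print(n & 1)     # prints 1, 1, 0, 1 on successive lines
    n >>= 1
```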


5. Counting Set Bits:
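A minimal sketch combining the previous drills (the helper name is illustrative):

```python
def count_set_bits(n: int) -> int:
    """Test each bit with n & 1 and shift right until n is exhausted."""
    count = 0
    while n:
        count += n & 1
        n >>= 1
    return count

print(count_set_bits(11))   # 3
```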


Each of these drills is designed to help you understand the concepts used in the final solution. Understand what each piece is doing, and then you will see how they all fit together in the final solution.
Q&A
Hello, I'm writing in C# and can someone explain why this code does not work?
return n.ToString().Where(n => n == '1').Count();
The code snippet you provided attempts to convert an integer n to a string, filter out the characters that are '1', and then count them.
One issue is that you're using n as the parameter name in your lambda expression, which shadows the original n variable; in C# this is a compile-time error, so you should use a different name for the lambda parameter.
The other issue is that n.ToString() produces the decimal representation of n, so even after the rename you would be counting '1' digits in the decimal string rather than the binary one. Here's the corrected code:
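A sketch of a fix that addresses both points (the lambda parameter is renamed to c, and the value is converted to a binary string before counting):

```csharp
using System;
using System.Linq;

public class Solution {
    public int HammingWeight(uint n) {
        // Convert.ToString(long, 2) yields the binary digits of n.
        return Convert.ToString((long)n, 2).Count(c => c == '1');
    }
}
```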


In the corrected code, c is used as the parameter name in the lambda expression; it represents each character in the binary string form of n. The Where method filters the characters, keeping only those that are '1', and the Count method then counts them.
class Solution:
    def hammingWeight(self, n: int) -> int:
        n = str(n)
        count = 0
        for i in n:
            if i == "1":
                count += 1
        return count
Why is this not working
Your code is counting the number of "1"s in the decimal representation of n, not its binary representation. The Hamming weight of a number refers to the number of "1"s in its binary representation.
To correctly calculate the Hamming weight, you should first convert n
to its binary representation using the bin
function. Here is the corrected code:
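A sketch keeping the structure of the original loop (the '0b' prefix contains no '1' characters, so it doesn't affect the count):

```python
class Solution:
    def hammingWeight(self, n: int) -> int:
        count = 0
        for ch in bin(n):     # e.g. bin(11) == '0b1011'
            if ch == "1":
                count += 1
        return count
```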


In this corrected code, bin(n)
converts n
to a binary string. The binary string starts with ‘0b’, so you could further improve your code by slicing off the first two characters:
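For instance, a sketch of the sliced variant:

```python
class Solution:
    def hammingWeight(self, n: int) -> int:
        count = 0
        for ch in bin(n)[2:]:   # strip the leading '0b'
            if ch == "1":
                count += 1
        return count
```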


This will correctly calculate the Hamming weight of n
.
why this code is not working ? but its working in other platforms class Solution: def hammingWeight(self, n: int) > int: n = str(n).replace(“0”,””) return(len(n))
The code you posted is converting the integer n
to a string, replacing all "0"s with an empty string, and then returning the length of the resulting string. However, this will not give you the Hamming weight of n, which is the number of "1"s in the binary representation of n.
The code might be working on other platforms if the input n
is already a binary string representation of a number. But in the function signature, n
is expected to be an integer, not a binary string.
If n
is an integer, the code you posted would effectively be counting the number of non-zero digits in the decimal representation of n
, which is not the same as the Hamming weight.
Here is the corrected code that correctly computes the Hamming weight:
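A sketch keeping the replace-based structure of the original:

```python
class Solution:
    def hammingWeight(self, n: int) -> int:
        # Work on the binary string, not the decimal one.
        n = bin(n)[2:].replace("0", "")
        return len(n)
```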


In this corrected code, bin(n)[2:]
is used to convert n
to its binary representation (as a string), and the '0b' prefix is removed. The rest of the code then works as intended, removing all "0"s and returning the number of "1"s.
I roamed through all the answers. I am wondering whether there is any other way than bitwise operations?
Yes, there are various ways to count the number of ‘1’s in the binary representation of an integer that don’t involve bitwise operations. Here are a few alternatives:
Using the bin function and count method in Python: You can convert the integer to its binary representation using the bin function and then count the number of '1's using the count method:

class Solution:
    def hammingWeight(self, n: int) -> int:
        return bin(n).count('1')

Using a while loop and integer division: You can repeatedly divide the integer by 2 until it becomes 0, and count the number of times the remainder is 1:

class Solution:
    def hammingWeight(self, n: int) -> int:
        count = 0
        while n:
            count += n % 2
            n //= 2
        return count

Using the built-in format function: You can use the format function to convert the integer to its binary representation and then count the number of '1's:

class Solution:
    def hammingWeight(self, n: int) -> int:
        return format(n, 'b').count('1')

All of these approaches avoid bitwise operations and directly count the number of '1's in the binary representation of the integer.
In JavaScript, why does 4294967293 >> 1 result in -2? This is making Test Case 3 fail. I'm very confused.
JavaScript uses 32-bit signed numbers for bitwise operations. This means that when you perform a bitwise operation, JavaScript first converts the number to a 32-bit signed number, then performs the operation.
The number 4294967293 is represented in binary as 11111111111111111111111111111101. When JavaScript interprets these 32 bits as a signed number, the leading 1 makes it negative: in two's complement, which is how negative numbers are represented in binary, this bit pattern is -3.
Therefore, when you perform the operation 4294967293 >> 1 in JavaScript, it first converts the number to -3, and then right shifts it by 1. The result of right shifting -3 by 1 is -2, because right shifting a negative number in two's complement rounds towards negative infinity.
If you want to perform bitwise operations on larger numbers in JavaScript, you’ll need to use a BigInt or a library that supports larger integers.
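A quick way to see the difference in a JavaScript console:

```javascript
const n = 4294967293;
console.log(n >> 1);    // -2: n is first reinterpreted as the 32-bit signed value -3
console.log(n >>> 1);   // 2147483646: the unsigned shift keeps all the value bits
```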
class Solution {
public:
int hammingWeight(uint32_t n) {
string s=to_string(n);
int count=0;
for(int i = 0; i < s.length(); i++){
if(s[i] == '1'){
count++;
}
}
return count;
}
};
what’s wrong with this solution? I am getting wrong Output against this code? Please Help
The problem with your current approach is that to_string(n)
is converting n
to a string representation of the decimal value, not the binary value. Thus, when you’re counting ‘1’s, you’re counting in the decimal representation, not the binary.
In C++, you can get the binary representation of a number using bitwise operations, or by using bitset from the standard library.
Here’s how you might modify your code to correctly count the number of ‘1’s in the binary representation of n
using bitset:
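A sketch using std::bitset:

```cpp
#include <bitset>
#include <cstdint>

class Solution {
public:
    int hammingWeight(uint32_t n) {
        // std::bitset<32> holds the 32-bit pattern; count() returns the set bits.
        return std::bitset<32>(n).count();
    }
};
```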


This code first converts n
into a 32-bit binary representation using std::bitset<32>
, and then uses the count
method of std::bitset
to count the number of ‘1’s.
Isn’t the input in example #1 = 11 and not binary representation of 11 ie. 00000000000000000000000000001011? It is confusing.
In the context of Hamming weight, the input is an integer, and the weight is computed on its binary representation. For example, if the input is 11 (decimal), its binary representation is 1011, and the Hamming weight is 3, because there are three '1's in the binary representation.
However, the way the question is posed or the input is provided might sometimes lead to confusion. It’s important to read the problem statement carefully. If the problem states that the input is a decimal number and asks for the count of ‘1’s in its binary representation, then you should first convert the decimal number to binary and then count the ‘1’s.
If you’re dealing with a programming problem or a piece of code and you’re unsure about the format of the input, it might be helpful to print the input or use a debugger to check its actual value.
Please can anyone explain whether we are taking an unsigned integer as input or its binary representation? When I use the statement cout << n I get a decimal number, but the input in the test case section is in binary representation.
When you’re dealing with integers in most programming languages, including C++ and Python, you’re usually working with decimal integers by default. When you print an integer variable with a statement like cout << n
, it will output the decimal representation of the integer.
However, when a problem involves binary operations or refers to bits, it often means that you need to consider the binary representation of the integer. The integer is still stored in memory as a binary number, but when you work with it in your code or print it out, it’s normally treated as a decimal number.
For example, if you have an integer n
with a value of 11
, that’s 11
in decimal and 1011
in binary. The binary representation is what’s relevant for calculating the Hamming weight (the number of ‘1’ bits), but if you just print n
, you’ll see 11
, not 1011
.
If you’re given an integer in a problem and it’s not clear whether it’s supposed to be a binary or decimal number, you should check the problem statement. The statement should specify the format of the number. In many problems that involve binary operations, the input might be given as a decimal number, but you’re supposed to consider its binary representation for the problem.
if we are giving input in form of integer …..then if we use n&1….won’t it consider the input n as a decimal and change such a big number into a binary one ?
In programming, the number you’re working with is stored in binary format, whether it’s an integer, float, or another type. When you write a number in your code or input it into your program, you’re usually writing it in decimal form (unless specified otherwise), but it’s stored in memory in binary form. This is true whether the number is small or large.
When you perform a bitwise operation like n & 1
, it operates on the binary representation of n
. The operation n & 1
will return the least significant bit of n
— that is, it will tell you whether n
is odd (if the result is 1) or even (if the result is 0). This is because the least significant bit in the binary representation of an integer determines its parity.
So, to answer your question, yes, the bitwise operation n & 1
considers n
in binary, but it doesn’t “change” n
into binary — n
is already stored in binary in the computer’s memory. The operation doesn’t change the way n
is stored or represented; it just uses the binary representation to calculate the result.
A problem with Java input! I use the common solution with bit shifting, but the testcase "11111111111111111111111111111101" returns an error. In the IDE the compiler also returns: error: integer number too large. Does anyone know what's wrong with the input?
The error you’re seeing is because the number you’ve written is being interpreted as a decimal number, and it’s larger than the maximum value that can be represented by an int
in Java.
If you want to write a binary literal in your code in Java, you should prefix it with 0b
or 0B
. So if you want to represent the binary number 11111111111111111111111111111101
, you should write it as 0b11111111111111111111111111111101
.
However, there’s another issue. Java uses signed integers, and the sign bit is the leftmost bit. If it’s 1, the number is negative, and if it’s 0, the number is positive. When you write 0b11111111111111111111111111111101
, the sign bit is 1, so Java interprets this as a negative number.
To represent large binary numbers that have their sign bit set, you can use a long and suffix it with L
:
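For example (a standalone sketch; the Demo class name is illustrative):

```java
public class Demo {
    public static void main(String[] args) {
        // The trailing L makes this a 64-bit long, so the 32-bit pattern stays positive.
        long n = 0b11111111111111111111111111111101L;
        System.out.println(n);                  // 4294967293
        System.out.println(Long.bitCount(n));   // 31
    }
}
```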


But keep in mind that the problem statement specifies the function should take an unsigned integer. In Java 8 and later, you can use Integer.parseUnsignedInt
with radix 2 to get the unsigned integer value of a binary string:
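A sketch (the Demo class name is illustrative):

```java
public class Demo {
    public static void main(String[] args) {
        // Parses the 32-bit pattern as an unsigned value; the int holds those same bits.
        int n = Integer.parseUnsignedInt("11111111111111111111111111111101", 2);
        System.out.println(Integer.toUnsignedString(n));  // 4294967293
        System.out.println(Integer.bitCount(n));          // 31
    }
}
```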


This will correctly interpret the binary string as an unsigned integer, so you can perform bitwise operations on it and calculate the Hamming weight.
I just tried a method to count the number of ones in the integer by converting it to string but the code runs and only one test case passed. Please help me out


The issue with your code is that the Integer.toString(n)
function is converting n
to a decimal string, not a binary string. When you’re counting ‘1’s, you’re counting them in the decimal representation, not the binary representation, which is why the code is not working as expected.
To convert n
to a binary string in Java, you should use the Integer.toBinaryString(n)
function. Here’s how you can modify your code to correctly count the number of ‘1’s in the binary representation of n
:
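A sketch of the corrected version:

```java
class Solution {
    public int hammingWeight(int n) {
        String s = Integer.toBinaryString(n);  // binary digits, e.g. "1011" for 11
        int c = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == '1') {
                c++;
            }
        }
        return c;
    }
}
```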


This code first converts n
into a binary string. It then iterates over each character in the string, incrementing the counter c
each time it encounters a ‘1’. The function then returns the count of ‘1’s in the binary representation of n
.
why is 0000000000000000000000000000101 % 10 a 5 and 101 % 10 a 1?
The % operator is the modulus operator, which gives the remainder when the first number is divided by the second.
The surprising result comes from how the literals are parsed, not from the modulus itself. In many languages, including C, C++, and Java, an integer literal that starts with 0 is an octal (base-8) literal. So 0000000000000000000000000000101 is the octal number 101, which is 65 in decimal, and 65 % 10 is 5. The literal 101, with no leading zero, is the decimal number 101, and 101 % 10 is 1.
Note that neither literal is interpreted as binary. If you want a binary literal, you need the 0b prefix where the language supports it: 0b101 is 5 in decimal, and 0b101 % 10 is 5.
Remember that in most programming languages, numbers are interpreted as decimal by default unless specified otherwise. If you want a number treated as binary in your code, you'll need to use the appropriate prefix, function, or method to do so.
int count=0;
int val=n;
System.out.println(val);
while(val!=0) {
count+=(val&1);
val = val>>1;
System.out.println(val);
}
Why is the above code not stopping at 0? val becomes 0 but it's not getting out of the while loop.
The code you’ve posted should indeed exit the while
loop once val
reaches 0. The while(val!=0)
condition ensures that the loop continues as long as val
is not equal to 0. When val
becomes 0, the condition evaluates to false
, and the loop should exit.
If you’re finding that the loop doesn’t exit even when val
appears to be 0, one possibility is that val
is not exactly zero, but a very small number close to zero, due to some kind of rounding or precision error. This can happen in some circumstances when working with floating-point numbers, but it shouldn't happen with integer values, as in your code.
Another possibility is that val
is a negative number. If val
is negative and you’re right shifting using >>
, the result is implementation-defined in some languages, and it could result in a negative value. In Java, >>
is a signed right shift operator, so if val
is negative, it will remain negative after the shift, and the loop will continue indefinitely.
To diagnose the problem, you could try printing out val
at each iteration of the loop to see its exact value. If it’s a very small number close to zero or a negative number, that could explain why the loop isn’t exiting.
If val
is indeed becoming negative due to the right shift, and if n
is supposed to be an unsigned value, you could use >>>
instead of >>
for the right shift. The >>>
operator in Java is an unsigned right shift operator, which will not preserve the sign bit and will not result in a negative value.
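A small demonstration of the difference (the ShiftDemo class name is illustrative):

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int val = -8;
        System.out.println(val >> 1);   // -4: signed shift copies the sign bit
        System.out.println(val >>> 1);  // 2147483644: unsigned shift brings in a 0
    }
}
```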
For the for loop here, it runs as many iterations as the log of the input. Is the time complexity of my code O(n) or O(log n)? I somehow remember learning that time complexity should be analyzed in terms of the number of bits of the input instead of the value of the input.
class Solution {
public:
int hammingWeight(uint32_t n) {
if(n==0)
return 0;
int k=log2(n);
int cnt=0;
for(int i=0;i<=k;i++)
{
cnt+=n&1;
n=n>>1;
}
return cnt;
}
};
The time complexity of your code is O(log n), not O(n), because the number of iterations of your loop depends on the number of bits in n
, which is proportional to the logarithm of n
, not n
itself.
Here’s why: In binary representation, a number n
needs approximately log2(n) bits. The log2(n)
function returns the base2 logarithm of n
, which is approximately the number of bits needed to represent n
in binary.
In your code, you’re looping from i = 0
to i = k
, where k = log2(n)
. This means you’re performing one iteration for each bit in n
. So, the number of iterations, and thus the time complexity, is proportional to the number of bits in n
, which is O(log n).
Remember that in time complexity analysis, n
is the size of the input. In many problems, n
is the number of elements in an input list or array, but in problems involving binary operations, n
is often the number of bits in the input number. In this case, your input is a single number, and the size of the input is the number of bits in that number. Therefore, a loop that runs once for each bit in the input has a time complexity of O(n) when n
is the number of bits, or O(log n) when n
is the value of the input number.
Why is this working with n % 2 and not with n % 10? Can anybody please explain. Thanks

int count = 0;
while (n != 0) {
    uint digit = n % 2;
    if (digit == 1) {
        count++;
    }
    n = n / 2;
}
return count;
This code is calculating the Hamming weight of a number, which is the number of ‘1’s in its binary representation. It does this by repeatedly dividing n
by 2 and checking if the remainder is 1. If the remainder is 1, that means the least significant bit (rightmost bit) in the binary representation of n
is 1, so it increments the counter.
When you use n % 2
, you’re getting the remainder when n
is divided by 2, which is equivalent to the least significant bit in the binary representation of n
.
However, when you use n % 10
, you’re getting the remainder when n
is divided by 10, which is not relevant to the binary representation of n
. In the decimal number system, n % 10
would give you the last digit of n
, but in the binary number system, every digit is either 0 or 1, so getting the last digit of n
in the decimal system doesn’t tell you anything about the number of ‘1’s in the binary representation.
That’s why the code works with n % 2
and not with n % 10
. n % 2
is a way to get the least significant bit of n
in binary, which is what you need to calculate the Hamming weight.
// THIS METHOD WILL RUN ONLY IF USER WILL ENTER INPUT IN BINARY FORM
int n;
int digit = 0;
int sum = 0;
while (n != 0) {
    digit = n % 10;
    if (digit == 1) {
        sum++;
    }
    n = n / 10;
}
return sum;
Kindly check this method. It runs successfully, but the one issue I faced is that an input of more than 18 digits in binary form fails. Please give me a solution if anybody has one.
The code you’ve written is intended to calculate the number of ‘1’s in a binary number, assuming that the binary number is input as a decimal. For example, the binary number 101 would be input as the decimal number 101, not the decimal number 5.
However, this method has a limitation related to the maximum value that can be stored in an integer type. In many languages, including Java, C, and C++, a 32-bit int can store values only up to 2^31 - 1 (about 2 billion), and a 64-bit long up to 2^63 - 1 (about 9.2 quintillion, a 19-digit number). A binary number with more than 18 digits, interpreted as a decimal number, can exceed even the long limit, and you'll get an overflow.
To handle binary numbers with more than 18 digits, you can accept the input as a string instead of an integer. Then you can iterate over the characters in the string, counting the number of ‘1’s. This will allow you to handle binary numbers of any length, limited only by the maximum length of a string in your programming language.
Here is an example of how you might modify your code to accept a string input:
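A sketch in Java (the class and method names are illustrative):

```java
class BinaryStringCounter {
    // n is the binary number as a string, so its length is effectively unlimited.
    static int countOnes(String n) {
        int sum = 0;
        for (int i = 0; i < n.length(); i++) {
            char digit = n.charAt(i);
            if (digit == '1') {
                sum++;
            }
        }
        return sum;
    }
}
```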


In this code, n
is a string, and n.charAt(i)
gets the i
th digit of the binary number. The comparison digit == '1'
checks whether this digit is ‘1’, and if so, increments sum
. The code then returns sum
, which is the count of ‘1’s in the binary number.
Can someone please tell me why this is not working in JavaScript ? I am using a initial mask 1 and then left shifting it. Thanks in advance
let count = 0
let mask = 1
for(let i=0;i<32;i++){
if((n & mask) == 1 ){
count++
}
mask <<= 1
}
return count
};
The issue with your code is in the conditional statement of your if
block.
The expression (n & mask) == 1
will not work as expected because the result of the bitwise AND operation (n & mask)
is not always 1
when a ‘1’ bit is found. It could be 1
, 2
, 4
, 8
, etc., depending on the position of the ‘1’ bit.
You need to check if the result of (n & mask)
is not 0
, which would mean that a ‘1’ bit is found in the current position.
Here’s the corrected code:
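A sketch of the corrected function:

```javascript
var hammingWeight = function(n) {
    let count = 0;
    let mask = 1;
    for (let i = 0; i < 32; i++) {
        if ((n & mask) !== 0) {  // non-zero means the bit under the mask is set
            count++;
        }
        mask <<= 1;
    }
    return count;
};
```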


In this version of the code, if((n & mask) !== 0 )
checks if the result of (n & mask)
is not 0
, and increments the count if it’s not. This will correctly count the number of ‘1’ bits in n
.
Can anyone tell me where is wrong in my code? Thank you!
class Solution(object):
    def hammingWeight(self, n):
        """
        :type n: int
        :rtype: int
        """
        count = 0
        res = [int(x) for x in str(n)]
        for digit in res:
            if digit == 1:
                count += 1
        return count
Your current code is treating n
as a decimal number, converting it into a string, and then counting the number of ‘1’s in the decimal representation of the number. However, the problem requires counting the number of ‘1’s in the binary representation of the number.
To fix this, you should convert n
to its binary representation using the bin
function and then count the number of ‘1’s. Here’s how you can modify your code:
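A sketch keeping the structure of the original:

```python
class Solution(object):
    def hammingWeight(self, n):
        """
        :type n: int
        :rtype: int
        """
        count = 0
        res = [int(x) for x in bin(n)[2:]]  # binary digits instead of decimal ones
        for digit in res:
            if digit == 1:
                count += 1
        return count
```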


In this version of the code, bin(n)[2:]
converts n
into a binary string and removes the ‘0b’ prefix that bin
adds. The rest of the code is the same as before, but now it’s correctly counting the number of ‘1’s in the binary representation of n
.
Could anyone explain what this line mask <<= 1; means? Thank you.
The <<=
operator in many programming languages, including C++, Java, and Python, is a bitwise shift assignment operator. It works as follows:
The << part of the operator is a left shift. This means that the binary representation of the variable on the left side of the operator is shifted to the left by the number of places specified on the right side. For example, if you have the binary number 0011 (which is 3 in decimal), and you left shift it by 1 place (0011 << 1), you get 0110 (which is 6 in decimal). Each shift to the left effectively multiplies the number by 2.
The = part of the operator is an assignment. It assigns the result of the left shift back to the variable on the left side.
So, the line mask <<= 1;
means “shift the binary representation of mask
one place to the left, and assign the result back to mask
”. This effectively multiplies mask
by 2.
For example, if mask
was initially 1 (which is 0001 in binary), after mask <<= 1;
, mask
would be 2 (which is 0010 in binary).
I am getting time limit exceeded on this, but passes a few test cases. Is the modulo operation very expensive? My while loop still runs through all 32 bits similar to approach 1.


The modulo operation itself is not particularly expensive, but it can be slower than bitwise operations. However, in your case, I don’t think the modulo operation is the cause of the time limit exceeded error.
The issue here seems to be with the line n = n >> 1;
. In Java, the >>
operator is a signed right shift, which means it maintains the sign of the number. If n
is negative, the most significant bit (the sign bit) is 1, and using >>
will shift in more 1 bits at the left, effectively creating an infinite loop.
For unsigned right shifts, Java provides the >>>
operator, which doesn’t preserve the sign bit and shifts in zeros at the left regardless of the original sign of the number.
So, replacing >>
with >>>
should resolve the time limit exceeded issue:
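A sketch of the fixed loop (using n & 1 in place of n % 2, since the modulo misbehaves for negative n in Java):

```java
class Solution {
    public int hammingWeight(int n) {
        int count = 0;
        while (n != 0) {
            if ((n & 1) == 1) {  // n & 1 also works for negative n, unlike n % 2
                count++;
            }
            n = n >>> 1;         // unsigned shift: zeros come in from the left
        }
        return count;
    }
}
```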


This version of the code will handle both positive and negative values of n
correctly.
can someone tell me why if((n&1)==1) bits++; doesn’t work in the first approach?
The expression (n & 1) == 1
checks if the least significant bit (i.e., the rightmost bit) of n
is 1
. If it is, it increments bits
. This is a common way to count the number of ‘1’ bits in the binary representation of a number.
However, it’s not enough to just check the least significant bit once. You need to shift n
to the right at each step to check all of its bits. This can be done using a loop or a similar construct.
Here’s an example of how you can modify your code to correctly count the number of ‘1’ bits:
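A sketch in Java (assuming that is the language in question; the unsigned shift >>> guards against negative inputs):

```java
class Solution {
    public int hammingWeight(int n) {
        int bits = 0;
        while (n != 0) {
            if ((n & 1) == 1) {
                bits++;
            }
            n = n >>> 1;  // unsigned shift so the loop also ends for negative n
        }
        return bits;
    }
}
```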


In this version of the code, the while(n != 0)
loop runs as long as n
is not zero. On each iteration, it checks the least significant bit of n
and then shifts n
one bit to the right. This allows it to count all of the ‘1’ bits in n
.
Remember that in some languages, right shift (>>
) on a negative number is a signed shift, which preserves the sign of the number. If n
is negative, a signed right shift could result in an infinite loop. Depending on the language and the expected range of n
, you may need to use an unsigned right shift if it’s available, or use other means to ensure n
becomes zero after sufficient shifts.
My Java solution:


Is it a good idea to call Integer.toBinaryString()? The solution is accepted but not optimized, would like to understand why?
The Integer.toBinaryString(n)
method converts an integer n
to a binary string. This is a straightforward way to get the binary representation of a number, and it makes the problem of counting ‘1’ bits easy to understand.
However, it's not the most efficient way to solve the problem. The reason is that creating a new string requires additional memory, and traversing the string to count '1' characters requires additional time. This makes the time complexity of your solution O(m), where m is the number of bits in the number (typically 32 for an int), and the space complexity O(m) due to the binary string.
An optimized approach would avoid the need to create a binary string and would count the '1' bits directly in the binary representation of the number. This can be done using bitwise operations, which are faster and more memory-efficient. Here's an example:
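A sketch of such an approach:

```java
class Solution {
    public int hammingWeight(int n) {
        int count = 0;
        for (int i = 0; i < 32; i++) {  // an int always has exactly 32 bits
            count += (n >>> i) & 1;     // test bit i directly, no string needed
        }
        return count;
    }
}
```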


This version of the code has a time complexity of O(1) (since the number of bits in an int is a constant) and a space complexity of O(1) (since it uses a fixed amount of memory regardless of the size of the input). It's more efficient than the version that uses Integer.toBinaryString(n)
, but it may be slightly harder to understand if you’re not familiar with bitwise operations.
This solution throws TLE


Shouldn’t it be faster than Approach #1 since this only iterates over the actual number of bits?
Your code correctly counts the number of ones in the binary representation of the number n
by continuously dividing n
by 2 (equivalent to right shifting n
by 1) until n
becomes 0.
However, if n
is negative, the code enters an infinite loop. The >>
operator in Java is a signed right shift. When applied to negative numbers, it shifts the bits to the right while preserving the sign bit (the leftmost bit), which remains 1. Therefore, n
never becomes 0 and the loop continues indefinitely.
To fix this issue, you can use the >>>
operator instead of >>
. The >>>
operator is an unsigned right shift operator that does not preserve the sign bit and shifts in zeros at the left regardless of the original sign of the number.
Here’s the corrected code:
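A sketch of the corrected loop:

```java
class Solution {
    public int hammingWeight(int n) {
        int ones = 0;
        while (n != 0) {
            ones += n & 1;
            n = n >>> 1;  // shifts in zeros, so n reaches 0 even when negative
        }
        return ones;
    }
}
```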


This version of the code will handle both positive and negative values of n
correctly, and it should not result in a time limit exceeded error.
I am new to programming. Could someone please tell me if this solution is acceptable according to the standards?


Your solution is correct, and for a beginner, it’s a very good start! It converts the integer to a binary string and then counts the number of ‘1’ characters in that string. This is a straightforward and intuitive approach to the problem.
However, it’s worth noting a few points that could help you write even better code in the future:
Efficiency: Your code creates a new string and then iterates over it. This takes extra time and memory. More efficient solutions would use bitwise operations to count the ‘1’ bits directly in the binary representation of the number, without creating a string. However, if you’re a beginner and you’re not comfortable with bitwise operations yet, your approach is fine for now.
Naming: In professional programming, it's important to choose clear, descriptive names for your variables. Instead of bin, sum, and i, you could use names like binaryString, numberOfOnes, and index. This makes your code easier to read and understand.
Looping: You're using a while loop to iterate over the characters in the string. This is perfectly fine, but in Java, a for loop is often more common for this kind of iteration. Here's how you could write your loop with a for loop:

for (int index = 0; index < binaryString.length(); index++) {
    if (binaryString.charAt(index) == '1') {
        numberOfOnes++;
    }
}

This code does exactly the same thing as your while loop, but it's a bit shorter and clearer. The variable index is declared and incremented right in the for loop statement, which makes it easy to see what's happening.
Again, your code is absolutely correct and it's a great start. Keep practicing, and over time you'll learn more about how to write efficient, clear, professional-quality code.
I submitted the following TS code and got a Time Limit Exceeded when the input is 11111111111111111111111111111101.


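The submitted code was not preserved in this copy; judging by the discussion below, it was presumably a loop of this shape, where >> is the culprit:

```typescript
function hammingWeight(n: number): number {
    let count = 0;
    while (n !== 0) {
        count += n & 1;
        n = n >> 1; // signed shift: loops forever when n is negative
    }
    return count;
}
```

(The sketch deliberately reproduces the bug being discussed, so it only terminates for non-negative inputs.)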
If I change it to n = n / 2, it totally works.
Any comments why this happens?
The reason your code gets stuck in an infinite loop with the input 11111111111111111111111111111101 is the way JavaScript handles bitwise operations. In JavaScript, numbers are stored as 64-bit floating-point values, but bitwise operations are performed on 32-bit binary numbers.
When you use the right shift operator (>>), JavaScript converts the number to a signed 32-bit integer, performs the shift, and then converts the number back to a 64-bit floating-point value. For negative numbers (and in two's complement binary, 11111111111111111111111111111101 is a negative number), the >> operator is a sign-preserving shift. This means that it shifts in ones on the left, so the number never becomes zero and the loop never ends.
When you use division by 2 (n = n / 2), JavaScript uses floating-point division, which keeps getting closer to zero and eventually underflows to zero due to the limits of floating-point precision. That's why the loop does end when you use division by 2.
To fix the issue with your original code, you can use the unsigned right shift operator (>>>) instead of the signed right shift operator (>>). The unsigned right shift operator shifts in zeros on the left, regardless of the sign of the number.
Here’s the corrected code:


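The corrected snippet was not preserved here; a sketch of the fixed version would be:

```typescript
function hammingWeight(n: number): number {
    let count = 0;
    while (n !== 0) {
        count += n & 1;
        n = n >>> 1; // unsigned shift: zero-fills, so the loop always ends
    }
    return count;
}
```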
This version of the code will correctly count the number of '1' bits in the binary representation of n, even when n is a negative number in two's complement binary.
I am too bad at bit manipulation. Don’t know how and where to start from.
Bit manipulation can be a challenging topic when you’re just starting out, but with a little practice, it can become a powerful tool in your programming toolkit. Here’s a basic guide to get you started.
1. Understand the binary number system: Binary is a base-2 number system, used extensively in digital systems. It consists of only two digits: 0 and 1. Each digit in a binary number represents a power of 2.
2. Learn the basic bit manipulation operations:
 Bitwise AND (&): The result is 1 only if both bits are 1.
 Bitwise OR (|): The result is 1 if at least one of the bits is 1.
 Bitwise XOR (^): The result is 1 only if the two bits are different.
 Bitwise NOT (~): It flips the bits of the number.
 Left shift (<<): Shifts the bits of the number to the left, filling the vacated positions with 0. Equivalent to multiplying the number by 2.
 Right shift (>>): Shifts the bits of the number to the right. A logical (unsigned) shift fills with 0 on the left; note that for signed values, many languages fill with the sign bit instead. Roughly equivalent to dividing the number by 2.
3. Practice problems: Start with simple problems that involve bit manipulation and gradually move to more complex ones.
4. Understand use cases: Bit manipulation is used in a variety of situations, such as optimizing certain algorithms, solving complex mathematical problems, data compression and encryption, graphics and image processing, and more. Understanding where and why you might use bit manipulation can help you grasp the concepts more effectively.
5. Keep learning and practicing: Bit manipulation is a broad topic with many advanced techniques. Keep learning new concepts and practicing them regularly. Over time, you’ll become more comfortable with bit manipulation and be able to use it effectively in your programming.
Remember that it’s okay to find this topic challenging at first. With consistent practice and study, you’ll get the hang of it!
Follow up: If this function is called many times, how would you optimize it? I think we can use a hashmap to maintain the numbers whose numBits have been queried so far. So, with every check we either return the saved result in the map or continue to count the numBits if the number doesn’t have an entry in the map.
The follow-up question asks how you would optimize the function if it were called many times. One approach is to use a technique called dynamic programming. Here, you would precompute the number of '1' bits for all integers in a certain range, and then use this information to more quickly calculate the Hamming weight when the function is called.
If we know that the function will be called for 32-bit integers, then there are 4,294,967,296 possible inputs. Precomputing for all these possibilities would be inefficient and consume a lot of memory.
Instead, we can precompute the Hamming weight for all 16-bit numbers (from 0 to 65535), which only requires 65536 entries. Then, for a 32-bit integer, we can split it into two 16-bit parts and look up the Hamming weight of each part in our precomputed table. This way, we only need two lookups, regardless of how many '1' bits the number has.
Here’s how you can implement this in Python:


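The implementation did not survive in this copy; a sketch consistent with the description above might look like this (the class shape follows the LeetCode style):

```python
class Solution:
    def __init__(self):
        # Precompute the number of '1' bits for every 16-bit value.
        # memo[i >> 1] is already known, so add the lowest bit of i.
        self.memo = [0] * 65536
        for i in range(1, 65536):
            self.memo[i] = self.memo[i >> 1] + (i & 1)

    def hammingWeight(self, n: int) -> int:
        # Split the 32-bit input into two 16-bit halves and look each up.
        return self.memo[n & 0xFFFF] + self.memo[(n >> 16) & 0xFFFF]
```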
In this code, self.memo stores the number of '1' bits for all 16-bit numbers. The hammingWeight function then uses bit manipulation to split the 32-bit input into two 16-bit parts, and finds the total number of '1' bits by looking up the precomputed values in self.memo.
This approach speeds up the function significantly when it is called many times, because the number of ‘1’ bits for each 16bit part is computed only once and reused later.
in javascript… i did like this…any comments…?


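The posted code was not preserved; based on the reply below, it was presumably along these lines (the >>> 0 normalization for negative inputs and the null guard for an input of 0 are my additions to make the sketch robust):

```javascript
var hammingWeight = function (n) {
    // n >>> 0 reinterprets n as an unsigned 32-bit value, so negative
    // inputs convert to their full 32-character bit pattern.
    const matches = (n >>> 0).toString(2).match(/1/g);
    return matches === null ? 0 : matches.length;
};
```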
Your JavaScript solution is correct! You're taking the given number n, converting it to a string representation in binary, then using the match method to find all '1' bits in the string. Finally, you return the length of the match array, which gives you the number of '1' bits.
One thing to note here is performance. String operations can be slower than bitwise operations. If performance is a concern (for example, if hammingWeight is called frequently), you might want to consider using bitwise operations to solve this problem.
Here’s an example of using bitwise operations to solve this problem:


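The example itself is missing from this copy; a version matching the description below would be:

```javascript
var hammingWeight = function (n) {
    let count = 0;
    let mask = 1;
    for (let i = 0; i < 32; i++) {
        if ((n & mask) !== 0) { // the bit at the mask's position is 1
            count++;
        }
        mask = mask << 1;       // move the mask one bit to the left
    }
    return count;
};
```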
In this code, a mask is initially set to 1 and then left-shifted on each iteration. n & mask checks the bit at the mask's position. If the result is not zero, the bit at that position is 1, and we increment the count of bits.
However, if you’re not dealing with performancecritical scenarios, your initial solution should work just fine. It’s quite readable and directly solves the problem.
A number of people commented that this does not take into account the number of bits. That is true, but the problem explicitly states it is a 32-bit number, so the analysis is correct that time is constant-bounded, i.e. O(1). Note the analysis makes this point: solution complexity is based on the number of bits, but the number of bits is fixed. However, I do think it is worth considering how to solve the problem in the face of potentially changing requirements. For example, what happens if the number of bits is now 64; how does that affect the time taken? In this case stating a complexity in terms of bits is helpful, particularly as the second solution is O(#bits set) while the first solution is O(#bits). A third solution not mentioned above is to use a lookup table to compute the number of bits in chunks. Here the bit-based complexity helps because now complexity is measured in the number of chunks that need to be examined. To conclude, strictly speaking this is an O(1) solution, but when considering possible extensions to the problem, giving complexity in terms of the number of bits is worthwhile, particularly if it gets extended to counting bits in a PKI solution with 2,048-bit keys, for example.
This comment provides a thoughtful discussion on the time complexity analysis of the “number of 1 bits” problem.
In the original problem statement, it's mentioned that the number under consideration is a 32-bit integer. Therefore, the time complexity is O(1) because the number of operations is constant, irrespective of the value of the input number.
However, the commenter makes a crucial point: if we need to adjust the solution for larger bit widths (like 64-bit integers or the 2,048-bit keys used in public key infrastructure), then it is beneficial to express the time complexity in terms of the number of bits.
The first solution mentioned has a time complexity of O(bits), because it checks each bit of the number, regardless of whether it’s a 1 or a 0. The second solution, however, has a time complexity of O(#bits set) because it only operates on the bits that are set to 1.
Finally, the commenter introduces a third solution, which involves using a lookup table to count the number of bits in chunks. Here, the complexity would be proportional to the number of chunks needed to be examined.
In summary, while the solution to the initial problem is technically O(1) due to the fixed 32bit number, expressing the complexity in terms of the number of bits is more useful and adaptable for potential variations of the problem.
( used JavaScript )
I wonder why it can't be solved by converting the input number to an octal number and then to a string. So, we can just count the nulls inside the string.
But, for some reason, I am seeing that the LeetCode JS output gives me this result: 00000000000000000000000000001011.toString(8) // '13'. Meanwhile, JS on both client & server gives the answer I expected: 00000000000000000000000000001011.toString(8) // '1011'. Does anyone know what this can be related to?
The discrepancy comes from two different things being converted.

On LeetCode, the input displayed as 00000000000000000000000000001011 is just the binary rendering of the integer your function actually receives, which is 11. Calling .toString(8) on 11 converts decimal 11 to octal, giving '13' (11 = 1×8 + 3).

In your own console, the literal 00000000000000000000000000001011 is something else entirely. JavaScript does not treat a run of 0s and 1s as a binary literal; binary literals need the 0b or 0B prefix. Moreover, in non-strict mode, a numeric literal with a leading 0 whose digits are all in the range 0-7 is a legacy octal literal. So 00000000000000000000000000001011 is parsed as octal 1011, which is 521 in decimal, and (521).toString(8) simply round-trips back to the octal string '1011'. It only looks like the binary answer by coincidence.
If you want to convert a binary number to an octal string in JavaScript, you can do it like this:


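The snippet is missing in this copy; the conversion described would look like:

```javascript
// Parse the binary string with base 2, then render the value in base 8.
const binary = '00000000000000000000000000001011';
const octalString = parseInt(binary, 2).toString(8);
console.log(octalString); // '13' (the binary string is the number 11)
```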
Here, parseInt('00000000000000000000000000001011', 2) converts the binary string to the number 11, and .toString(8) then converts that number to the octal string '13', which matches what you saw on LeetCode. Note, though, that an octal string is not useful for this problem: to count the '1' bits, count the '1' characters in the binary string n.toString(2) instead.
C++ clean and accepted code:


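The accepted code did not survive in this copy; from the walkthrough below, it is the classic n &= (n - 1) loop, which would read:

```cpp
#include <cstdint>

class Solution {
public:
    int hammingWeight(uint32_t n) {
        int ans = 0;
        while (n) {
            ans++;
            n &= (n - 1); // clears the rightmost set bit of n
        }
        return ans;
    }
};
```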
Note: This is a very common bit manipulation technique, as it allows you to visit only the set bits. It is also used in the N-Queens problem during optimization.
This C++ solution indeed elegantly solves the problem of counting the number of '1' bits (also known as the Hamming weight) in an unsigned integer. Let's break it down:
The method hammingWeight accepts an unsigned integer n as its parameter.
A counter ans is initialized to keep track of the number of '1' bits.
The while (n) loop continues as long as n is not zero.
Inside the loop, ans is incremented by 1 because we know that n is not zero, which means it has at least one '1' bit.
The expression n &= (n - 1); is a clever bit manipulation trick. The & operator performs a bitwise AND operation. Subtracting 1 turns the rightmost '1' bit of n into '0' and all the bits after it into '1'. Therefore, n & (n - 1) effectively removes the rightmost '1' bit from n.
This process continues until all '1' bits are removed from n, at which point n becomes zero, the loop ends, and ans is returned.
This technique efficiently counts the number of ‘1’ bits because it only operates on those bits that are ‘1’, skipping over the ‘0’ bits. As a result, the time complexity of this algorithm is proportional to the number of ‘1’ bits, not the total number of bits.
As you’ve noted, this bit manipulation technique can be used in a variety of problems, including the NQueens problem, where it is used to quickly check if a position is under attack by any previously placed queen.
Why doesn't this work?

    class Solution:
        def hammingWeight(self, n: int) -> int:
            n = str(n)
            return n.count('1')
The approach you’re trying doesn’t work because you’re converting the integer to a string directly. When you do this, the integer is being converted to a decimal representation, not a binary one. The function str(n)
converts n
into a string that represents the decimal (base 10) number.
For example, if your input integer n is 11, str(n) will give you '11', and counting '1' in this string will give you 2, even though 11 in binary representation is '1011', which contains three '1's.
To correct your approach, you need to first convert the integer into a binary string. In Python, you can do this using the bin function. Here's how:


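The snippet is missing here; a sketch of the corrected method:

```python
class Solution:
    def hammingWeight(self, n: int) -> int:
        # bin(11) returns '0b1011'; the '0b' prefix contains no '1'
        # characters, so it does not affect the count.
        return bin(n).count('1')
```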
This will give you the correct answer because the bin function converts the integer to a binary string. Note that the bin function adds a '0b' prefix to the binary string, but this doesn't affect the count of '1's.
Can anyone suggest why it runs into an infinite loop when I replace while (n != 0) with while (n > 0)?
The while (n > 0) loop may run into an infinite loop due to the nature of binary arithmetic and how unsigned integers are represented in computer systems.
The key lies in understanding the bitwise AND operation (&) and how negative numbers are represented in binary form (two's complement).
If n could become negative in the loop due to an operation, the condition n > 0 would not be met, and the loop would exit. However, because n is declared as uint32_t (an unsigned 32-bit integer), it cannot take negative values: an operation that would produce a negative result instead wraps around to a large positive value. As a result, the loop continues indefinitely because the condition n > 0 remains true.
On the other hand, the condition n != 0 is true both for ordinary positive values of n and for the large positive values that n wraps around to in the case of underflow, and so the loop will eventually exit when n is reduced to 0 through the bitwise operations inside the loop.
How is it that uint32_t n is being interpreted as binary and not in decimal? Looking at Approach #2, the bit mask only works if 1000 - 1 is interpreted to mean 0111 instead of 999. I know I've used uint32_t for decimal before, so I'm confused.
The type uint32_t is just a way to define an unsigned 32-bit integer. Now, how you interpret the bits of that integer is up to you.
When we talk about binary, decimal, octal, or hexadecimal, these are all just representations of the underlying bits. A uint32_t number like n stores 32 bits of information. For instance, if n is equal to 9, then in binary it's represented as 1001, in decimal as 9, and in hexadecimal as 9. These are different ways of representing the same value.
Now, in this problem, when we talk about the binary representation, we are interested in the individual bits of n. When you see 1000 - 1 in the context of binary numbers, it indeed means 0111, not 999. This is because 1000 is interpreted as a binary number (8 in decimal) and not a decimal number, so subtracting 1 yields binary 0111 (7 in decimal).
So, to clarify, uint32_t n is not being interpreted as binary specifically. It's just that we are operating on the level of individual bits in this problem. This is why we often talk about the binary representation: it makes clear which bits we are manipulating.
I have a question which is probably pretty stupid: when the data type is int, why is the input 11111111111111111111111111111101 treated as binary and interpreted as -3? Why isn't it treated as the integer 11111111111111111111111111111101?
That’s actually a very good question! It has to do with how the binary number system and the representation of negative numbers work in computer science.
The input 11111111111111111111111111111101 is being interpreted as a binary number. This binary number is composed of 32 bits, which is the standard size for integers in many programming languages. The leftmost bit in a binary number is often used as the sign bit. If this bit is 1, it indicates a negative number, and if it is 0, it indicates a positive number.
The number 11111111111111111111111111111101 in binary has its sign bit as 1, meaning it's a negative number. The rest of the number (all the bits except the sign bit) relates to the magnitude of the number.
However, the representation isn’t as straightforward as just considering the rest of the bits as the magnitude. Negative numbers are typically represented in computers using a scheme known as two’s complement. In two’s complement, a negative number is formed by inverting all the bits of its positive counterpart and then adding 1 to the result.
So, to represent -3 in two's complement binary form:
We first represent +3 in binary: 00000000000000000000000000000011.
Then we invert all the bits: 11111111111111111111111111111100.
Finally, we add 1: 11111111111111111111111111111101.
So, the binary number 11111111111111111111111111111101 is interpreted as -3 when using the two's complement representation for negative numbers. It's not treated as the positive integer 11111111111111111111111111111101 because of the leading 1 sign bit and the conventions of two's complement.
I am a beginner. I have never done a question on bit manipulation before and my code is full of bugs. Can anyone tell me what I am missing or what wrong concept I am using here?

    int hammingWeight(uint32_t n) {
        int k = 0;
        while (n) {
            if (n | 1) { k++; } // checking if lsb is 1
            n = n >> 1;         // dropping lsb
        }
        return k;
    }

Thank You
You are very close! The issue lies with how you're checking whether the least significant bit (LSB) is 1. The | operator is a bitwise OR operator, which produces a nonzero result as long as either operand has a bit set. Since one operand here is the constant 1, n | 1 is always nonzero, regardless of whether the LSB of n is 1 or not, so k gets incremented on every iteration.
Instead, you should use the bitwise AND operator &, which yields a set bit only where the corresponding bits of both operands are 1.
Here’s the corrected code:


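The corrected snippet is missing from this copy; with the single-operator fix applied, it would read:

```c
#include <stdint.h>

int hammingWeight(uint32_t n) {
    int k = 0;
    while (n) {
        if (n & 1) { // nonzero only when the least significant bit is 1
            k++;
        }
        n = n >> 1;  // dropping the lsb (safe: n is unsigned)
    }
    return k;
}
```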
In this version of the code, n & 1 is nonzero if and only if the least significant bit of n is 1. That's because the binary representation of 1 is, well, 1, and the bitwise AND with 1 yields 1 if the LSB of n is 1 and 0 otherwise.
What's wrong here? A similar Python3 solution worked for me: check the last bit and right-shift the num by 1 bit.

    public class Solution {
        // you need to treat n as an unsigned value
        public int hammingWeight(int n) {
            int count = 0;
            while (n > 0) {
                if ((n & 1) == 1) {
                    count += 1;
                }
                n = n >> 1;
            }
            return count;
        }
    }

The JAVA one fails.
Your Java solution is not working because Java doesn't have unsigned integers, and the right shift operator >> in Java is a signed shift: when n is negative, n >> 1 fills the leading bit with 1 (since it preserves the sign), so n never reaches zero. Note also that with the condition n > 0, a negative n never enters the loop at all, so the method simply returns 0 for negative inputs.
To fix this, use the unsigned right shift operator >>>, which always fills the leading bit with 0, and loop while n != 0.
Here is the correct Java code:


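The corrected code is missing from this copy; applying those changes to the method quoted above would give (a sketch):

```java
public class Solution {
    // Treat n as an unsigned value.
    public int hammingWeight(int n) {
        int count = 0;
        while (n != 0) {     // n > 0 would skip negative inputs entirely
            if ((n & 1) == 1) {
                count += 1;
            }
            n = n >>> 1;     // unsigned shift: zero-fills from the left
        }
        return count;
    }
}
```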
In this corrected code, n >>> 1 shifts n one bit to the right and fills the leading bit with 0, effectively treating n as an unsigned integer. The loop now terminates correctly when all bits of n have been processed.
I did not understand how the Time complexity is O(1) for the first solution. It will be great if somebody can let me know.
The time complexity of the first solution is often expressed as O(1) because the number of iterations is capped at a constant value, which is the size of the data type used (in this case, a 32-bit integer).
However, it's important to understand that the O(1) complexity here is a bit of a simplification. If we were to consider the bit-size of the numbers as a variable (which we generally don't in problems like this because it's fixed), then the time complexity would actually be O(b), where b is the number of bits in the number.
So why do we say O(1) instead of O(b)? Because in the context of this problem, b is a constant (32), and any constant factor in Big-O notation can be dropped, so O(32) simplifies to O(1). This means the algorithm runs in constant time, regardless of the specific value of the input number.
However, if you were dealing with numbers with an arbitrary number of bits, then you would have to consider the time complexity as O(b), as the time it takes would scale linearly with the number of bits in the input. This is not the case for this particular problem as we’re dealing specifically with 32bit integers, hence the O(1) time complexity.
how did you know it was 32 and not 64 bit?
The number of bits in an integer in a programming language usually depends on the specific data type used to declare it.
In the problem "Number of 1 Bits" that we've been discussing, the input integer n is declared with the type uint32_t. The 32 in uint32_t indicates that n is a 32-bit unsigned integer, so we know that the input is a 32-bit number.
It's important to understand that the exact size of integer types can vary depending on the programming language and the system architecture. In C and C++, for example, the size of the plain int type is platform-dependent (commonly 32 bits even on 64-bit systems). Some languages, like Python, have arbitrary-length integers, while others, like Java, have fixed sizes independent of the architecture.
For languages that have specific data types for different bit-lengths (like uint32_t and uint64_t in C++), you can tell the bit-length directly from the data type.
But if you are uncertain, it’s best to refer to the language’s documentation or the specific problem statement to clarify the bit size. For instance, the LeetCode problem explicitly states that the function takes a 32bit integer as input.
It would be better if the problem were explicit about the input being a 32-bit integer instead of having to infer that from the examples.