Broadly speaking, both knowledge and imagination are types of intelligence. Intelligence is nothing more than the ability to store and process information. You might add "create information" to this list, which seems to be what artists and other imaginative or creative people do, but this is precisely the point: The mind only creates ideas based on what it has experienced, what it already knows and has felt. The mind cannot create something without a frame of reference, and so the act of creating art is really just a subset of processing information: Creativity is like a computer function in which the input is everything the mind has learned, and the output is whatever art the artist produces.
Intelligence, by itself, cannot change anything, because intelligence only handles information. For that intelligence and information to have any effect, they must be converted into real-world events. This is why a computer, by itself, is completely powerless: A computer can do nothing but store and process information. Many science-fiction movies imagine computers which take over the world by launching nuclear missiles or commanding huge armies of killer robots, but these things are only possible if some link exists between the computer and those physical objects. A computer can only control something which has a specially-made interface for that computer.
For example, you have probably seen 3D printers which can produce various plastic objects. These printers are controlled by a computer, and so you can, through a computer, tell a 3D printer to print, for example, a cup or a plate. What's important to understand here is that the computer is not capable of printing anything; the only way it can tell the printer to print anything is through an electronic cable which connects the two devices. (Well, okay, it's possible to create a wireless link between the devices, but the idea is the same: The two devices need to be programmed to communicate with each other.) This may seem "obvious", but many people lose sight of this fact because they imagine a world in which some central computer controls everything. That world is not possible: A computer cannot control anything at all unless some physical link exists which allows it to send electronic signals to devices, and even then, events only happen when those devices understand and obey the computer. If a device were to refuse an order from a computer, for whatever reason, the computer would be helpless to enforce compliance.
In this regard, human intelligence works the same way as computer intelligence. You can learn a lot of things, understand a lot of things, and imagine a lot of things, but all of this is just information, and that information, by itself, doesn't do anything. There are also science-fiction scenarios in which people develop hyper-intelligence and are somehow, through their intelligence, capable of things like telekinesis or telepyrosis (often incorrectly called "pyrokinesis"), which is absolute nonsense because intelligence alone cannot cause things to happen. The only way telekinesis or telepyrosis could be possible would be if there were some physical link from a person's brain to physical objects; intelligence alone won't make it happen. Imagining stuff like this might be fun (see here for an insane list of literally hundreds of fictional "-kinesis" abilities that people have come up with), but this is just fictional entertainment.
So an intelligence device, whether it's a silicon computer or an organic brain, can only process information; it can't actually make things happen. It's dependent on external devices, and connections to those devices, to be able to do anything. The devices which are controlled by the computer or brain must be both willing and able to do what the controller wants, or else the desired result will not be achieved. In thinking about this, I realized that the same holds true for language, both human languages like English or French and computer languages like C or Java. A language, by itself, cannot do anything. Language is a way for one "thinking device" to communicate with another thinking device, but it depends on both devices being able to process a specific set of symbols. In most human languages, as well as in most high-level computer languages, these symbols typically take the form of words.
At some point in your life, you've probably hit upon a concept which you wanted to express or communicate somehow, but for which you lacked appropriate words. Language is finite; there are only so many words in any language, and the words which exist in a language are a reflection of what that language's culture has experienced. A word is coined when someone in that culture has an idea which they want to attach a word to. Until then, there is no word for that idea, and so expressing that idea becomes difficult. Many languages have words which exist only in that language and which cannot be easily translated into any other language. If you want to express an idea for which there is no word in the language you're using, you have to be indirect and use a phrase: If you're writing in a language which has no word for "airplane", you might use the phrase "flying machine" instead.
The same applies to computer languages: A computer language has a finite library of functions built into it, and you can only use that language to do whatever its vocabulary enables you to do. If you want to do something for which there is no built-in function, you may be able to turn a word into a phrase. For example, many early microprocessors included built-in instructions to add numbers, but not to multiply. If you wanted to multiply, you had to reformulate the multiplication as repeated addition: 4 times 8 could not be expressed as 4 times 8, but rather as 8 plus 8 plus 8 plus 8. More advanced math functions were even more elaborate: The Commodore 64 had a built-in BASIC interpreter which included functions to perform trigonometry operations like calculating sines, cosines, and tangents, but these functions were not built into the CPU, so the Commodore 64 could only calculate these things by working through long lists of simpler instructions, which took a relatively long time to process. Even today's newest CPUs generally lack instructions to perform advanced mathematics like calculus-level differentiation or integration, and so software with fairly complex logic, consisting of many instructions, must be written to do these things indirectly.
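To make this "turning a word into a phrase" concrete, here is a minimal sketch in C. The first function multiplies the way a processor without a multiply instruction would, using nothing but addition; the second approximates a sine with a truncated Taylor series, which is essentially what a software trig routine has to do when the CPU has no sine instruction. The function names and the number of series terms are my own choices for illustration, not anything from a specific machine.

```c
#include <assert.h>
#include <math.h>

/* Multiply two non-negative integers using only addition, the
   way a processor without a multiply instruction must:
   "4 times 8" becomes "8 plus 8 plus 8 plus 8". */
unsigned multiply(unsigned a, unsigned b) {
    unsigned result = 0;
    for (unsigned i = 0; i < a; i++) {
        result += b;  /* add b to the total, a times over */
    }
    return result;
}

/* Approximate sin(x) with a truncated Taylor series:
   sin(x) = x - x^3/3! + x^5/5! - ...
   A routine like this, built from nothing but adds, multiplies,
   and divides, is how a machine without trig instructions
   computes a sine -- slowly, one term at a time. */
double sine(double x) {
    double term = x;  /* current term of the series */
    double sum  = x;
    for (int n = 1; n <= 10; n++) {
        /* each term = previous term * -x^2 / ((2n) * (2n + 1)) */
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum  += term;
    }
    return sum;
}
```

The point is visible in the shape of the code: what the hardware expresses in one "word" (a single instruction) must otherwise be spelled out as a whole "phrase" of simpler operations, and the longer the phrase, the longer it takes to process.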
Even if you can perfectly formulate and express an idea into language, language is a form of communication, and for any communication to work, it's necessary not only that the "speaker" or "sender" transmits information correctly, but also that the "listeners" or "receivers" receive and understand the information correctly. Language is complex, and there are many possible reasons why a person or an electronic device might misunderstand communication. Fundamentally, though, language is still dependent on a vocabulary, a set of words or other symbols which the language makes available to people who want to communicate in that language. If the people or devices involved in that communication don't understand specific symbols, or if they lack the necessary symbols to express what they want to express, language stops being useful.
Within the context of the limitations of intelligence and language, perhaps the most deluded person is the software developer who knows how to write code and believes that this ability makes them omnipotent. I see countless people who really seem to believe that because they learned how to program and then started looking at some software's code, they can achieve whatever they want as long as they write the correct code. These people fail to understand the fundamental limitations of both intelligence and language. They don't understand the limitations of a programming language: any language is only capable of doing what its vocabulary provides. In today's high-level languages, you usually cannot communicate directly with the hardware; the language simply lacks functions to write information directly into memory or into specific input/output ports. And even if you write in assembly language, assembly language does not have infinitely many instructions; a CPU has a finite list of instructions it can understand and run, and if what you want to do cannot be expressed in that set of symbols, the language cannot do it for you. Meanwhile, software developers also fail to understand the fundamental limitation of the computer itself: The computer can only process information, nothing more. The computer may be very smart, but turning its software into any kind of real-world physical effect is only possible if the computer has a link to a person or device which is willing to obey its instructions. If the computer has no one listening to it who is willing to obey it, it can do nothing except think.
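To illustrate what "communicating directly with the hardware" actually looks like when a language does allow it, here is a sketch of memory-mapped I/O in C. On a real embedded system, the pointer would hold a fixed physical address that the hardware wires to a device register; here, a plain byte array stands in for the device, because the address, register layout, and "motor" device are all invented for illustration. Writing the byte is easy; the write only does anything because of the physical link behind the address.

```c
#include <assert.h>
#include <stdint.h>

/* A plain byte array standing in for a device's memory-mapped
   registers. On real hardware this would not be ordinary RAM:
   it would be a fixed physical address wired to the device.
   The layout and the device itself are hypothetical. */
static uint8_t fake_device_registers[4];

/* On a real embedded system this would read something like
   (volatile uint8_t *)0x40001000 -- a bare number that only
   means anything because the hardware maps a device there. */
static volatile uint8_t *const MOTOR_CTRL = &fake_device_registers[0];

void motor_start(void) { *MOTOR_CTRL = 0x01; /* device-defined "run" bit */ }
void motor_stop(void)  { *MOTOR_CTRL = 0x00; /* clear it: device halts    */ }
```

Note what the code cannot guarantee: without the physical mapping, `motor_start()` just stores a byte in ordinary memory and nothing in the world moves. The language supplies the vocabulary for the store; only the link between the address and a willing, working device turns that information into an event.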
By no means do I mean to say that intelligence is useless. Far from it. But people need to understand its limitations. You may be very smart, but if that's all you are, then that doesn't really count for anything. Intelligence needs to be combined with something else to make it meaningful or useful. With all due respect to Einstein, if all you know how to do is imagine, all you're going to have is a head full of dreams which you can neither communicate nor turn into reality.