How has your preference for programming languages evolved over time?

  • Maybe you used to be a huge static-typing nut and eventually learned to accept dynamic languages. Maybe you're the other way around. Maybe you used to fear functional languages, but eventually you learned monads and became a Haskell master. Or maybe you used to hate JavaScript but eventually embraced it so you could do web programming. Or maybe you used to love C++ but decided it's too much of a mess, or maybe you used to hate C++ until you decided you wanted to do your own memory management and get good performance. What's your story? What factors influenced your change of perception?

  • Answer:

    The Early Years

    I began with Perl, because my arrogant, smartass, doe-eyed, naive younger self thought rules were for losers, and I wanted to write code like a megaleet superhacker. Perl was the "swiss army chainsaw," and I believed it was the only language I would ever need. I knew small amounts of C++ and Java, but not enough to write anything meaningful. That didn't bother me, because I knew they were for cubicle monkeys with tiny brains. I had heard of C, but figured C++ was basically the latest version of C. I learned a bit about Python from my brother (about Python; not to be confused with learning a bit of Python), but instinctively hated it for being too hand-holdey. I was particularly put off by Python's "there is exactly one correct way to do it" motto, which directly opposed my beloved Perl's http://en.wikipedia.org/wiki/TMTOWTDI. I didn't know a thing about functional languages, much less any programming paradigm outside my little imperative world of scripting and OO languages. I didn't really know how to use third-party libraries in Perl, and I was more or less confined to single-file scripts, but I figured I knew enough to do pretty much anything. Looking back, I'd rate my competence at somewhere between a third and a half of an entry-level software developer. If I were to evaluate this past version of myself for a paid internship, I would reject him 10 minutes into the interview.

    After tooling around with Perl for most of a year, I got an unpaid internship at a graduate school, where the hiring criteria were basically "hire anyone willing to learn, regardless of skill, as an unpaid intern". Thankfully, that work turned out to be primarily http://en.wikipedia.org/wiki/Natural_language_processing, which Perl lends itself very well to, so I shined brightly. Just a few months later, I was offered a full-time research position, which I would hold for the next four years.
    (It was more than the fact that I happened to know some Perl; I was also a clever and crafty little shit.) Within the first few months of using Perl for a serious purpose, I managed to outsmart the ever-loving Christ out of myself. I'd work on some complicated program, set it aside for two or three weeks, and then get bewildered and frustrated by my own super-clever, uncommented Perl. It was an order of magnitude faster to simply rewrite programs from scratch than to understand my own code. And that's when the importance of readable, structured code began to dawn on me.

    My lab also used a lot of Python, but I had been avoiding it up to this point out of misplaced and totally uninformed hatred. Begrudgingly, I started teaching myself some Python. The more I learned and practiced Python, the more I liked it. It was a humbling experience, as I gradually became more aware of the depths of my ignorance, but Python was patient with me, and I developed a deep appreciation for its simplicity and accessibility to newbies. I realized that I did need to be led by hand through new and increasingly complex programming concepts, and Python was my wise and compassionate sensei.

    At this stage, my thoughts on various programming languages were roughly as follows:

    Perl: "I should get better at this. As I master concepts in Python, I'll go back and learn them the Perl way."
    Python: "A vehicle for building up my skills and learning new concepts. Perl remains the One True Path; Python will accelerate my progress along it."
    PHP: "Perl for the web -- awesome! I should pick this up at some point."
    C++, Java: "These obsolete languages are for cubicle monkeys, and should be ignored."

    Better, Faster, Stronger

    In the research lab, I found myself needing to write high-performance code increasingly often, and Python was no longer cutting it.
    By this point, I understood what "low-level" and "high-level" referred to in the context of programming languages, and I knew what the tradeoffs of moving lower were. I had already taken an introductory C++ course at a community college when I was 12, and I had also recently breezed through another at the university I was attending, so I decided to dive into C++. It didn't take long to become thoroughly sickened by C++. The STL in particular proved to be an inelegant, overcomplicated mess to work with. I hated everything about the usual code, compile, debug cycle. In learning about C++, I got a better understanding of how it related to C. Seeing that all the things I hated about C++ were C++-specific constructs crudely bolted onto a much simpler language, I decided to go back even further and teach myself C instead.

    It was in C that I found exactly what I expected and was hoping for in a low-level programming language: I accepted the manual memory management and lack of built-in data structures as a cost of doing business, and the tremendous gains in performance took my breath away. In a frenzy, I rewrote my collection of Python tools in C and stared in wonder at the little reports I generated that compared their performance. I scoured the Internet for more information on high-performance computing, discovered and learned how to use profilers, and continuously tweaked and optimized my code whenever I got the chance. I became a full-blown performance junkie.

    Now try to imagine my elation at discovering that my lab had a supercomputer at its disposal. My brain just about leapt out of my skull. I attended information sessions on campus about how to use it and the rules to abide by, and pushed its TechOps team into long conversations in person and over email about how it was put together, how the scheduler worked, and how to make the best use of it.
    Eventually, I was offered administrator powers and duties for the supercomputer, but it was decided that that'd be too much of a distraction from the research I was being paid to do. Around this time, my usual duties were to assist grad students in the AI lab with running their experiments on the supercomputer. I would coordinate with them on the data, the methods, and the results they wanted, write and tweak the C code that made it happen, and obsessively monitor their progress on the supercomputer. I remember watching the status of specific nodes like a vulture and scooping them up for my use the moment they became available. Those were happy times.

    Throughout this time, I looked in more detail at the "older" languages to see what else I was missing, and learned a great deal about the development of new languages between the '50s and the '80s. I made some cursory investigations into languages like Fortran, Cobol, Algol, Simula, and Ada, but I put enough of a value on practical concerns such as modern-day use and active development that I thought it'd be a better use of my time to continue exploring, i.e. expanding the breadth of my knowledge, in search of interesting things to learn in depth. Those topics tended to be about performance, e.g. assembly code, computer architecture, and distributed computing.

    I also learned a bit more about modern languages, such as Java and C#, enough to write very basic programs in them. In particular, Ruby struck my fancy as a nifty fusion of Python and Perl. I knew of course that it was abominably slow, but I had the experience to know that I often enough didn't need extreme performance. And so I decided that I would master C and Ruby, using C when performance mattered and Ruby when it didn't, but try to maintain some competence in other languages in case they mattered in ways I didn't understand yet.
    I ended up paying far more attention to C than to Ruby, and so my Python-fu still prevails over my Ruby and even Perl skills (which have by now heavily deteriorated).

    At this stage, my thoughts on various programming languages were roughly as follows:

    Perl: "I've stopped returning its phone calls."
    Python: "A solid general-purpose language, until extreme performance is necessary."
    Ruby: "Python, but slicker. Learn more."
    PHP: "Sounds like Perl for writing web apps. I still don't know much about it, but I'm wary of it."
    C: "Ugly as sin, but beautifully performant. It doesn't need to look pretty; only the benchmarks do."
    C++: "Like C, but uglier and more frustrating to work with. Just use C."
    Java: "It's like C++ got its act together. Accomplishes what C++ was going for much more cleanly, but I don't care much for what C++ was going for, so I don't care about Java. A lot of software companies seem to use it, though, so I might need to be able to use it to get a job."
    C#: "Microsoft Java. Good to know that it exists, but I still can't bring myself to care."

    Enlightenment

    As I learned more about the history of computing and the evolution of programming paradigms and languages, I came into contact with some real oddballs. I didn't really know what to make of them at first. Initially, I more or less dismissed them as highly experimental / proof-of-concept / mental masturbation / toy languages. I had a sort of passive academic interest in what on Earth it was they were trying to prove or accomplish, but never put any real priority on finding that out.

    Then I encountered Lisp. And as anyone who has ever put "lisp" into an online search knows, you can't read about Lisp online without seeing a metric buttload of hipster wankery about the elegance and beauty and perfection of Lisp. Curious as to what all the fuss was about, I decided I wanted to learn some Lisp.

    ...and I immediately ran into a brick wall.
    Anyone who has tried to independently learn Lisp from online sources knows exactly what I'm talking about. It was excruciatingly difficult to find a starting point. I didn't know which incompatible implementation to start with, or whether I should just read the ANSI standard that nobody obeys, and every forum post I read and IRC conversation I started exploded immediately into a flamewar. Questions were answered with questions I didn't know the answer to. I tried following along with some books, but half of them were outdated and the others didn't give enough attention to practical concerns like getting some gorram input or how programs were actually supposed to be run from a shell, which was important to me because I learn best by testing things constantly as I read through learning material.

    Thankfully, I had by that point seen Scheme mentioned often enough. I downloaded a copy of http://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Computer_Programs, got literally the first Scheme implementation that I saw in a Google search (I think it was guile), and got down to business. As I read through SICP, I attempted the problem sets with Scheme, of course, but I also tried them in the imperative languages I knew, and it was this that helped me realize the beauty of functional forms, and helped me bridge the gap in my understanding between the realms of programming and mathematics. I recognized the Holy Trinity that is map, filter, and fold as fundamentally important components of virtually every meaningful program I've written. It felt like I was uncovering the primal forces that programs were composed of. It was awe-inspiring how so few and simple mechanics could be composed into arbitrarily complex systems. Complex systems are easier to build, easier to reason about, and more reliable when assembled by composing simple parts. And so I began to strive for simplicity whenever possible (in my code, anyway).
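    The map/filter/fold trio above can be sketched in Python (standing in for SICP's Scheme); the function and the numbers here are invented purely for illustration:

```python
from functools import reduce

# Sum the squares of the even numbers in a list, composed entirely
# from the three fundamental forms: map, filter, and fold
# (fold is spelled `reduce` in Python).
def sum_even_squares(xs):
    evens = filter(lambda x: x % 2 == 0, xs)           # keep the even numbers
    squares = map(lambda x: x * x, evens)              # square each one
    return reduce(lambda acc, x: acc + x, squares, 0)  # fold into a sum

print(sum_even_squares([1, 2, 3, 4, 5, 6]))  # 4 + 16 + 36 = 56
```

    The same pipeline shape covers a remarkable share of everyday programs: select the data you care about, transform it, then collapse it to a result.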
    After getting a firm understanding of Scheme, it was easy enough to make the lateral move into Common Lisp. I gradually built up fluency with CLISP and SBCL.

    Sometime during my quest for enlightenment, a grad student in my lab suggested that I look into Haskell. To put it mildly, it didn't go well. Purely functional programming was too much, too soon for me; all it did was confuse the heck out of me. Lisp emphasized some critically important functional features, but it still had loops and mutable variables and such. I couldn't comprehend how you were meant to get anything done with a language that lacked these, and I didn't believe that grad student when he told me, excitedly, that I didn't need them and that my code would be better without them (Marco, if you are reading this, I am so, so sorry for ever doubting you on this).

    While the details now escape me, I remember it was shortly after my failure to learn Haskell that OCaml emerged from the ether and revealed itself to me. And I don't know what it was about it, but OCaml entranced me. I found myself absent-mindedly seeking out more learning material for it, pushing through http://projecteuler.net problems just so I could continue writing in it, reading through the source for its standard library, because there was something about OCaml I couldn't quite put my finger on that I just loved. It was through the use of OCaml that I grokked the significance and benefits of immutable data structures and declarative programming in general, and the equivalence of iterative loops and tail recursion.

    When I finally snapped out of my OCaml trance, I gave Haskell another try, and I took to it like a... weasel to... whatever weasels take very well to. I took to it like I don't take to appropriate similes. It should suffice to say that Haskell and I are now very good friends.
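    That loop/tail-recursion equivalence can be sketched even in Python (which, unlike OCaml, does not eliminate tail calls, so the recursive version would exhaust the stack for large n); both functions are invented for illustration:

```python
# The same sum written twice: as an iterative loop, and as its
# tail-recursive twin.
def sum_loop(n):
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

def sum_tail(n, acc=0):
    # The recursive call is the very last step ("tail position"), so the
    # accumulator argument plays exactly the role of the loop variable.
    if n == 0:
        return acc
    return sum_tail(n - 1, acc + n)

print(sum_loop(100), sum_tail(100))  # both yield 5050
```

    A compiler that performs tail-call elimination turns the second form into precisely the first, which is why OCaml programmers can write loops as recursion without paying for it.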
    In this phase, I ended up learning a little bit of a heck of a lot of other strange languages, because I wanted to see if there were other profound lessons to be learned from them. And that exercise was moderately successful. Probably the most valuable takeaway was discovering from Erlang how awesome the http://en.wikipedia.org/wiki/Actor_model of concurrency is (and how I sometimes wish other languages had the same bit pattern matching construct Erlang has, even though it's pretty arcane).

    At this stage, my thoughts on various programming languages were roughly as follows:

    Perl, PHP: "Don't speak to me. In fact, stand over there, away from me."
    Python: "It's a little depressing how much Guido van Rossum has neutered the functional programming features in Python 3. Still a solid language, though, and it's great that it's newbie-friendly."
    Ruby: "Oh, right, you exist."
    C++, Java, C#: "Ugh. Writing in these is like filling out paperwork. Kludging in support for lambdas won't save you."
    OCaml: "Hello, beautiful."
    Haskell: "Hello, ma'am."
    F#: "OCaml.NET. Probably my only hope for actually using OCaml in the software industry. At least the CLR means multi-core support for threads."
    Lisp: "Using you can still be weirdly painful and unnatural for certain tasks, and it'd be great if you were declarative. Also, I can't interact with communities focused on you without wanting to punch neckbeards."

    The Industrious Costya

    Bugs suck. They waste time and money, they make you look incompetent, and they undermine customer trust. They are bad for you, your business, your associates, and your customers. The only parties they benefit are your competitors. It is easier to make your programs work reliably when their components act predictably. Practices that let bugs go unnoticed should be avoided and discouraged. With me so far?

    (Read in your best Hank Hill voice:) Here are some "fun" language "features":

    Mutable state.
    Dynamic type systems.
    Weak type systems.
    Interpreted languages.
    These "features" help create and conceal bugs, and are therefore to be avoided and discouraged. I think most people who are aware of my preference for functional languages simply think I'm a hipster, or that I like knowing things that most people find confusing. That may have been true for Early Years Costya and admittedly for Enlightenment Costya, but it's a gross mischaracterization of Industrious Costya. Industrious Costya believes in evangelizing declarative and functional styles of programming, in encouraging their widespread understanding and use, because he well and truly believes it will make the world a better place. Clever hacks and arcane tricks aren't sexy. Predictability and safety are sexy.

    The "features" listed above offer the developer some convenience, but at the cost of additional responsibility, since permitting them undermines the compiler's ability to detect bugs. Experience has taught Industrious Costya that his brain is not perfect, and that it should not be overburdened with responsibilities that should be the compiler's. Industrious Costya wants to focus on the problem, not on doing the compiler's job for it. Industrious Costya wants to never see a runtime TypeError or NameError in his life.

    Nonetheless, Industrious Costya understands that he must sometimes suffer one or more of the above "features" to meet business needs (e.g. performance), or when using a library written by some poor Unenlightened sod, or because 95% of developers only know how to write programs that depend on these "features", or to cooperate with a team of developers who apparently love the crap out of fixing bugs or something, because they keep using stupid language features that offer some totally negligible convenience at the cost of crashing your program all the gorram time whenever the customers use it.
    And while Industrious Costya cannot help but wonder if these people are taking this seriously, or if they're masochists or something, he is capable of sucking it up and accepting a paycheck for pretending to be a static type checker. But he will complain a lot.

    I continue to go after languages that give me practice with paradigms I'm not familiar with, regardless of poisonous unfeatures, because the best practices I've refined over the years are informed by all the lessons I've learned when trying to put new languages to meaningful use. I have used, and forgotten, and re-learned, and re-forgotten more programming languages than I care to count, but the ones that stick with me best are C, Python, and OCaml.

    My present-day thoughts on various programming languages are roughly as follows:

    C: "Manual memory management and weak typing are annoying, but C still reigns supreme in the world of high-performance computing."
    Python: "Interpreted, with dynamic typing. I am so, so sick of NameErrors and TypeErrors. It's still my go-to language for very simple tools, but I'd really like to stop using it professionally."
    OCaml: "No one loves you like I do. Sure wish you didn't have that GIL, though. And type classes would be nice."
    F#: "Please don't tell OCaml I've been seeing you."
    Haskell: "Zygohistomorphic prepromorphism."
    Perl: "Go away."
    PHP: "Kill it with fire."
    C++, Java, C#: "Knowing what I know today, I don't think I'd have the patience to use you for long."
    Scala: "A little too bureaucratic for my liking, but really cool otherwise. Interop through Scala is pretty much the only way you'll get me to work with Java."
    JavaScript: "Dynamically and weakly typed, besides clearly having been made by a deranged lunatic. I'll pass."
    Lisp: "Dynamically typed. Out of touch with reality. Pass."
    Ruby: "Oh, right, you exist."
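    The runtime TypeError that Industrious Costya dreads is easy to reproduce; a minimal Python sketch, with an invented function and data:

```python
def total_price(prices):
    # Nothing stops a caller from putting a string in the list; the bug
    # only surfaces at runtime, on whichever input happens to trigger it.
    total = 0
    for p in prices:
        total += p
    return total

print(total_price([1.50, 2.25]))   # works fine: 3.75

try:
    total_price([1.50, "2.25"])    # ships silently, crashes at runtime
except TypeError as e:
    print("runtime TypeError:", e)
```

    A static type checker rejects the second call before the program ever runs, which is exactly the trade being argued for here.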

Costya Perepelitsa at Quora

Other answers

I have come to value speed of learning and programming much more than the performance of programs. I have come to value my ego less.

When I was an undergraduate, I thought it was most important to write the fastest programs possible. I prided myself on the fact that the neural networks I wrote for my AI class and the simulations I wrote for my randomized algorithms class ran *so* fast. Never mind that my C/C++ implementations often took days longer to build than those of my classmates who used OCaml/Python. I could run my program so many more times than they could before the assignment was due. Plus, my programs looked so hardcore!

As I learned more about programming language theory, I shifted more towards preferring statically typed functional languages like ML and Haskell. The programs were so pretty and the type-checker was so helpful in catching errors. Plus, programming in Haskell made me feel so hardcore! In between these periods of my life there was a brief interlude when I programmed in Python while doing computational biology research. It was so fun and easy, but I felt like programming needed to involve more pain.

In conducting my graduate school research in programming languages, I've shifted away from C/OCaml/Haskell. I wrote the first iteration of my http://projects.csail.mit.edu/jeeves/ in OCaml with a C++ backend. I spent most of my time writing optimizations rather than working on the design or semantics of the language. A couple of years in, my officemate commented that the language would be easy to embed as a domain-specific language in Scala using Scala's overloading features. Recognizing that performance was not *so* important and appreciating that Scala at least looked something like ML, I agreed to the switch. Scala made it *so* much easier to prototype language features that I became a huge Scala evangelist. (See http://jxyzabc.blogspot.com/2012/11/should-you-learn-scala.html.)
After a couple of years of working with the Scala implementation, I started thinking that even Scala was too obscure for Jeeves to get users. And because I've either given up on being hardcore or because I've stopped needing to prove myself, http://jxyzabc.blogspot.com/2013/12/im-using-python-now.html.

Jean Yang

A short summary up front: I've started to care more about how expressive a language is, how productive I am, and how maintainable the resulting code is. For more particular features, I need languages that can grow and languages with nice denotational semantics. This last point just means I want languages with simple and composable models. I should not have to do the job of the compiler in my head just to figure out what my program means! This is just like saying I want more declarative languages, except more specific about what "declarative" means: it turns out "declarative" isn't really well-defined in practice! Apart from this, I've also stopped caring about how popular a language is or how easy it is to learn. Neither of these concerns affects me directly!

Expressiveness

Increasingly, the main thing I value in a language is expressiveness: I want it to be as easy as possible to write the programs I want in the way I want. This helps with both productivity and maintainability: I can write less code, which leaves me less code to maintain, and I can write prettier code, which then makes it easier to maintain. Basically, I figure productivity is the most important thing, both now as I write the code and later as I extend and support it. I generally do not care about getting the ultimate amount of performance from the get-go: if I need to optimize, I will only do it after profiling.

Static Typing

In the past, I used to think dynamic typing was the only way to get expressiveness out of a language. Considering the only statically typed languages I knew were Java and C++, this was not a surprising opinion! Since then, I have learned Haskell and OCaml, so I know this is categorically untrue: statically typed languages can be expressive. In fact, I now know enough about Haskell to say that Haskell is more expressive than most dynamically typed languages.
For example, there are things that typeclasses and type inference enable that are essentially impossible to replicate with dynamic types. The same goes for actual (i.e. "pure", for lack of a better word) functional programming.

Growing

One of the most important factors behind the expressiveness of a language is its facilities for abstraction. This can easily be summed up in a single idea: a programming language has to be able to grow. "Growing a Language" was the one talk that affected my views on using and designing programming languages the most. The basic idea is that programmers should be able to extend the language. The programming language designer cannot predict every possible thing programmers will want in the future. Moreover, a language that tries to do this will be too large and unwieldy to use! Instead, a language should provide a small, orthogonal set of features that let programmers naturally add new constructs and expressions as libraries. Think about the features of a programming language as a linearly independent basis for the space of interesting programs.

The simplest example and quick test I've found largely comes from the talk itself: how easy is it to add a new type of number to your language? How natural is using this new type once you've added it? Java, for example, shows how not to do this: just look at BigInteger or BigDecimal for ugly code that looks nothing like Java programs with primitive ints. On the other hand, Haskell does this really well: adding complex numbers or rational numbers or fixed-point numbers or even computable real numbers is trivial. As an extreme example, you could add algebraic numbers of the form a + bφ in a handful of lines, which gets you a http://stackoverflow.com/a/6037936/286871 all without losing accuracy like you would with floats.

Popularity

Largely as a result of figuring out what I like about a language, I've stopped caring about popularity.
Just because something is popular does not mean it's good: popularity is not a good measure of a language's actual quality. I largely think of this by analogy to music and food; perhaps it's best called the McDonald's effect, since that's one of the most extreme examples of popularity and quality not being correlated. I wouldn't eat at McDonald's unless I really had to, just like I won't use Java or Python unless I really have to. Note that I'm not saying everything popular is bad: rather, popularity is just not an indicator of quality. For some reason, many programmers do not actually agree about this in regards to programming languages: for them, popularity is all that matters!

I also think about this as "worse is better is worse" (or perhaps just "worse is worse"). The whole idea of "worse is better" tells me why other people are using a particular tool -- why it's popular. But it does not tell me why I should use it! If anything, it tells me why I should not use a particular tool and, more generally, why I should not always follow the herd.

Design

I've started really caring about the aesthetics of a language's design. It turns out this is actually important for expressiveness and maintainability. Elegance is a practical concern. I care about how well a language is put together. I am not a big fan of compromises for the sake of backwards compatibility (Scala) or for the sake of not being too different. Have you noticed how most popular programming languages are more alike than different? I am not going to choose a programming language just because it's not too different from what I'm used to.

Semantics

This also means I care about semantics. In particular, I care for languages that are designed with clean denotational semantics in mind. This is exactly the same idea as looking for "declarative" languages, except much better defined.
Having nice semantics makes it easier to think about programs, write the programs in the first place, and reason about code others have written. Most of a programmer's job is thinking; simple semantics make this easier. And, of course, semantics give a more satisfying meaning to both my programs and my language. The semantics of a programming language pervade every single line you write. If you have a language with poor semantics, you have an implicit drag over everything. This means that I am not willing to sacrifice on the semantics of my language unless I get a gigantic benefit for it. Much more than a library for parsing Yaml or anything like that.

Theory

Semantics naturally bring me to another point: I am no longer afraid of theory. There is this common myth that theory and practice are somehow at odds, and that anything "theoretical" is somehow impractical. This is completely absurd. Theoretical considerations and practical considerations are often the same: theoretical work often has the same goals, with better foundations, than so-called "pragmatic" things. Theory is also a wonderful source of abstractions that are at once well-behaved, general, useful, and well-understood. Abstract algebra is a perfect example: things like semirings pop up everywhere, behave regularly, and yet are extremely useful. They're great for everything from parsing algorithms to parallel programming. And, thanks to the work of countless mathematicians, we understand the behavior of algebraic structures far better than whatever ad-hoc abstraction somebody came up with because they were afraid of sounding theoretical!

Learning

I now think that learning a language is not a big deal, even if that language is pretty different from what you already know, like Haskell. My particular insight was that learning a language is an O(1) operation, while the benefit of using a better one is O(n) in the amount of code you write. So I do not think "ooh, it's too difficult!"
is a good excuse not to pick up a language! If you do, you're just doing yourself a massive disservice.

Basically, I've found I have a lot of reasons for using Haskell and languages like it, and relatively little reason to use weak compromises like Scala or languages which throw almost all of this away, like Python or Go. Here's the "Growing a Language" talk I mentioned. I think I've linked to it on something like five different answers now, I just like it so much :P.
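The a + bφ example above can be made concrete. Since φ² = φ + 1, pairs (a, b) representing a + bφ are closed under multiplication, and integer powers of φ compute Fibonacci numbers exactly, with no floating-point error. Here is a minimal sketch in Python rather than Haskell, with invented names:

```python
class Phi:
    """Exact numbers a + b*phi, where phi is the golden ratio (phi**2 == phi + 1)."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __mul__(self, other):
        # (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*phi**2
        #                        = (ac + bd) + (ad + bc + bd)*phi
        a, b, c, d = self.a, self.b, other.a, other.b
        return Phi(a * c + b * d, a * d + b * c + b * d)

    def __pow__(self, n):
        result = Phi(1, 0)  # the multiplicative identity, 1 + 0*phi
        for _ in range(n):
            result = result * self
        return result

phi = Phi(0, 1)
# phi**n == F(n-1) + F(n)*phi, so the coefficient of phi is the nth Fibonacci number.
print((phi ** 10).b)  # 55
```

Because the coefficients stay integers throughout, there is no loss of accuracy, which is exactly what the linked Fibonacci answer exploits.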

Tikhon Jelvis

A distilled version of my PL history:

school (C, C++, Java, Haskell, Scheme)
embedded devices (C)
Fortune 20 company (Java)
FOSS (C, Rust, Python)
research (Python)
HPC (C++)
web (PHP, JavaScript, Python, Ruby)

I've reached the following conclusions:

Functional (or functional-style) programming will become more prominent. It eliminates most bugs and scales horizontally. Past the initial learning curve, functional code is more readable and understandable.

Good static typing is necessary. Not the old-school verbose style, but H-M type inference (until something better arrives). Static typing improves performance by allowing for compile-time optimizations and avoiding runtime decisions. Developer productivity is increased by eliminating a category of unit tests. Static types combined with property testing and some integration tests feels like the right amount of testing.

JIT compilation is the future of compilers. Static analysis has its limitations. There are many compile-time optimizations whose improvements are ambiguous until runtime.

Rust's implementation of an optional garbage collector as a library is unique. This gives you the best of both worlds and is much better than C++ and D's alternatives.

Python and C are my workhorse languages, but I spend my free time writing Haskell and Rust. I prioritize correctness, and then modify for performance if necessary.

For the longest time I thought Python was the best general-purpose language. However, after using it for ~5 years now, there are a few issues I can't ignore:

Performance is abysmal, and I consider CFFI and Cython dirty hacks. Refactoring large Python programs for better performance is a significant undertaking, which spawned the idea of Julia. I think PyPy should be the future default implementation. Writing everything in (R)Python and optimizing the compiler is better than CPython's dual-language implementation.
Dynamic typing is great for smaller or one-off projects, but a lot of productivity is lost writing superfluous unit tests for robust applications. Check out GitHub's most popular Python projects. Now how many have unit tests? While Lua and V8 has proven that dynamic typing can be made relatively fast, static typed languages are still win the performance crown. GVR fundamentally does not agree with functional programming's principles, and thus it will always relegated as an afterthought.
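The "static types plus property testing" idea above doesn't require any particular framework; as a rough, language-agnostic illustration (sketched here in C++ with no test library assumed, and a hypothetical helper name), a property check just generates random inputs and verifies an invariant over all of them:

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Hand-rolled property check: for any vector of ints,
// reversing it twice yields the original vector.
bool reverse_twice_is_identity(int trials = 100) {
    std::mt19937 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<int> len(0, 50), val(-1000, 1000);
    for (int t = 0; t < trials; ++t) {
        // Generate a random input vector.
        std::vector<int> xs(len(rng));
        for (int& x : xs) x = val(rng);
        // Apply the operation twice and check the invariant.
        std::vector<int> ys = xs;
        std::reverse(ys.begin(), ys.end());
        std::reverse(ys.begin(), ys.end());
        if (ys != xs) return false;  // counterexample found
    }
    return true;  // property held for all trials
}
```

Libraries like Haskell's QuickCheck automate the generation and shrinking of such random inputs; the sketch above is only the bare idea.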

William Ting

I started programming in my first language, BASIC, on a TRS-80 Model I around 1980, I believe. Every language I have learned has been for practical purposes: either school required it, or work either requested it specifically (the hanging chad that is Ada springs to mind) or had a need for "something" to fit a gap. When I finally broke down and finished learning C, it was because the C-station owner I worked for had need of cost-analysis and predictive software; interpreted BASIC was too slow, and as for compiled RPG-II, the only compiler the owner had: no friggen way. I bought a C compiler and finished learning the language I had never completed when trying to learn it "for fun". I have never managed to learn a language just for the sake of learning it. For some bizarre reason my brain never worked that way, and it works in some strange ways. Still, any practical need and the language is there. My resume list of languages stays at 10. That's long enough to get the point across, and any longer seems pretentious. Though I am about ready to drop off COBOL and learn ... I think a functional language; I haven't decided which. We need a change. But we need it fast and efficient, just-in-time stuff I can overload fast. I haven't decided what to fill that missing piece with yet, and it may take some looking before I settle and put my nose to the grindstone. In direct line of ascent to the current time and place:

  • for work: Basic, COBOL, RPG-II, C, Assembly, C++

  • for school: Basic, Pascal, Fortran, COBOL, RPG-II, Java, C, C++, Assembly

There are some missing dead-ends that I learned for one-time projects (like Ada); they aren't in this list because their use is no longer needed, or wasn't needed for very long. Many languages I still use for translation of code. I am the person they call on to translate Fortran into C.

Jeffry Brickley

I've been programming for a very long time, and I've mostly been a generalist the whole time. If anything, I've become a much stronger believer in "the right tool for the right job", and will happily switch between several languages based on what I'm doing. Here are my current top three go-to languages:

  • JavaScript (prototyping/frontends)

  • Python (utilities/tools)

  • Go (servers, concurrent jobs)

So mostly dynamic languages that focus on utility over performance. Because optimizing for humans gets more done than optimizing for computers.

Andreas Blixt

1980 - Fortran. I was 13. I wrote programs on paper but had no computer to run them. I was fascinated by computing!

1985 - BASIC. I didn't know squat about programming languages, but I loved programming my little ZX Spectrum.

1986 - Forth! Not. Back to BASIC.

1987 - Pascal. Turbo. Fast and fun. Loved it. Did a lot with it. First "large" programs. Began appreciating clean code.

1988 - C. Hated it at first but began loving it very soon (still do). First intro to Lex and Yacc. Fell in love with compiler design and programming languages, and with C/Unix.

1989 - C. First job coding C: telecom software on Unix. C felt like second nature, but I struggled with trying to get it to be more abstract. Crazy use of macros (something I still do).

In these early years, I used whatever programming language was available on the system I was using. My goal was to write the best possible program I could with whatever was available. I had not yet begun to appreciate the qualities of the programming languages I was using. That is, until ...

1990 - Miranda. She changed my life. My first introduction to functional programming; I am still picking my jaw up off the floor. If you have never used Miranda before, please do so before it disappears! It was the first language I truly fell in love with. I was fascinated by the equational programming style, type inference, and referential transparency.

1991-3 - Scheme. Finally! I could now implement *any* abstraction I could dream up, both syntactically and semantically, with no effort. First-Class Everything! No type-checker heckling me for perfectly valid code. Scheme would eventually become my go-to language. This was also my introduction to lambda calculus, denotational semantics, category theory, and mathematical logic. I could now think about programs in a mathematical way, which was strangely liberating. Meta-programming became part of my toolset. Moggi had just introduced monads and Wadler was popularizing them.

1993/4 - Lisp. Symbolics Lisp Machine. Still the finest machine I have ever developed on. CLOS was my first introduction to OO. OO has been a downhill experience since then.

1995-2001 - Java and (Visual) C++. The lowest point of my programming life. Fortunately I came out of this trough. I did use some Lisp, but it was very difficult/expensive to do on Windows.

2002-Present - Having done the whole circuit, I had firmly decided that the best way to program was functional. I settled on Scheme (now Racket) and relied on auxiliary support from C/C++, JavaScript, etc. I also began working with Haskell, but I still prefer Scheme over Haskell.

Along the way I did enough to familiarize myself with CoffeeScript, Python, Perl, Ruby, Objective-C, and Coq/Gallina. I still touch Java (gingerly) from time to time. My programming now is defined entirely by conciseness of notation. I use Scheme because I can invent the most concise notation and write programs with it. The functional way of thinking is now second nature. Even if I have to code in C or C++, I design the system functionally, often using Scheme to generate the final code. As I continue this quest for ease of expression, my design of software is increasingly influenced by models and theories of mathematical logic. Statelessness is the default in designing algorithms. Aspect-orientation is a big part of my software design, but I use OO sparingly. My only complaint about the toolset I use is that it is difficult to readily transfer my methodologies to mobile OSes. I'm hoping this will get fixed soon, else I'll have to do something about it :-)

Anurag Mendhekar

Chronologically: BASIC, x86 Assembler, C, Delphi, C++, Java, JavaScript, Python. I've also dabbled with Scheme a bit during an online PL class, for homework, and written some very trivial C# for test code. Unlike most programmers, I almost always code close to the metal, whether I'm writing video editing software, OpenGL-based rendering code, or a high-performance GIF encoder. 98% of the code I ever wrote was C++. Even with JavaScript, it was a canvas-based perspective renderer, not so much looking like JS as C. The other big JS code I wrote was a complex browser history analyzer, again more like C than JS, just a lot of SQLite access and string manipulation. The more layers there are between the code and what's running on the hardware, the weirder I feel. Dynamic typing feels utterly floppy, like running in wet bathroom slippers. I've not yet found the motivation to delve into functional programming for anything serious. The pundits very wisely recommend Haskell, F#, and so on, but I really don't see the pot of gold at the end of the rainbow. Somehow, it seems a bit weird to me to insist on statelessness dogmatically when every second my program is rewriting and mutating gigabytes of data. I feel like a materialist hedonist encountering Zen Buddhism: I can see it seems to have great value to its proponents, but I can't see myself being there yet. Meanwhile, C++ is my tool of choice and I prefer it to everything else, because it simply starts with a philosophy of being everything to everyone. It is the dark side, and it is an easy path to great power. Whether I'm writing browser plugins or CUDA number-crunching code, I can use C++ with whatever level of complexity I need. I see no point in using C anymore, because it's simply so tedious to get things done, and it's rarely faster than good C++. So, to answer the question: no, my choice of programming languages has remained more or less the same, C++ mostly. It gets shit done fast. The language has no pretense: it's complex, and you stay away from the stuff you don't understand well. And yes, you can even combine inline assembler and template metaprogramming in the same chunk of code, if you need to! Someday I want to try writing a data compression program in Haskell, simply to taste what the hell they are raving about... But I am not sure I won't leave the dark side of the force so easily.
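That last boast is real C++: compile-time template metaprogramming and compiler-specific inline assembler can sit side by side. A minimal sketch, assuming GCC-style extended asm on x86-64 (with a plain fallback elsewhere):

```cpp
// Compile-time factorial via template metaprogramming:
// the recursion is unrolled entirely by the compiler.
template <unsigned N>
struct Factorial {
    static constexpr unsigned long value = N * Factorial<N - 1>::value;
};
template <>
struct Factorial<0> {
    static constexpr unsigned long value = 1;  // base case
};

// GCC-style inline assembler (x86-64 assumption): add two longs.
inline long asm_add(long a, long b) {
#if defined(__x86_64__)
    long result;
    // result starts as a ("0" ties it to operand 0), then b is added.
    __asm__("addq %2, %0" : "=r"(result) : "0"(a), "r"(b));
    return result;
#else
    return a + b;  // portable fallback on other targets
#endif
}
```

So `asm_add(Factorial<5>::value, 1)` mixes a value computed at compile time with an instruction emitted by hand, in one expression.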

Vivek Nagarajan

Started with raw PHP and jQuery, got pretty good at both of them, and got pretty comfortable using various frameworks and libraries, and writing frameworks and libraries. I often ventured into other "fad" languages like Python, Ruby, and Node, but kept coming back to PHP, because I kept realizing that PHP gets many small things wrong, but it gets a lot of big things very right. Picked up Java, Scala, and Haskell, realized the importance of static typing and the joy of proper functional programming, and incorporated that stuff into my programming habits. Currently working on an easy-to-use framework and a pretty nifty ORM in PHP and Scala, and working on replacing PHP with Scala as my weapon-of-choice prog-lang. And just when I made decent headway in this plan, HHVM's "Hack" came along: a statically typed, interpreted language with a world-class type system and proper facilities for functional programming, i.e., something with all the strengths of PHP and very few of its weaknesses, and a lot, lot more. So I'm waiting for HHVM Hack to kind of "mature", and working on my Scala foo.

Kapil Verma
