Realism and the purpose of science

Whether or not we can directly comprehend reality is irrelevant. What is relevant is how well we are able to predict the future. That is the purpose of science, and the only thing worth striving for, because that is what gives us increased control over our own fate. (Of course, this assumes that our purpose is to increase control over our own fate – some would object to that, mostly leftists and other types of collectivists.)

For example, whether or not something called “electron” actually exists is not important. What is important is that our model can use the construct it calls “electron” to predict future observations.

Our model consists of two major things:

  1. Constructs that model reality (for example the construct of “electron”)
  2. A way to translate observable reality into the language of our model

Using these two main components, our model can increase our ability to understand (and thus control) the future in the following way:

  1. We start by making an observation of current reality and translating it into our model
  2. We then introduce a cause in our model, and verify through observation (possibly indirectly) that the same cause occurs in reality
  3. We make a prediction in our model of what a future observation will look like, then verify by making the observation and seeing whether it matches what we predicted (a minimal sketch of this loop follows below)
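
To make the loop concrete, here is a minimal Python sketch. The falling-object model, its class name and all the numbers are my own invented illustration – the point is only the observe, intervene, predict, verify structure:

```python
# A toy "model of reality": a dropped object under constant gravity.
# Everything here (names, numbers) is invented for illustration.

class FallingObjectModel:
    G = 9.81  # a construct inside the model; whether "gravity" exists is beside the point

    def __init__(self, observed_height_m: float):
        # Step 1: translate an observation of current reality into the model
        self.height = observed_height_m
        self.velocity = 0.0

    def apply_cause(self, push_m_s: float):
        # Step 2: introduce a cause in the model (a downward push)
        self.velocity += push_m_s

    def predict_height(self, t_s: float) -> float:
        # Step 3: predict what a future observation will look like
        return self.height - self.velocity * t_s - 0.5 * self.G * t_s ** 2

model = FallingObjectModel(observed_height_m=100.0)  # observe, translate into model
model.apply_cause(push_m_s=2.0)                      # introduce the cause in the model
predicted = model.predict_height(t_s=1.0)            # ~93.1 m

observed = 93.2  # a later (hypothetical) measurement
print(f"predicted {predicted:.1f} m, observed {observed:.1f} m")
# If predictions keep matching observations, the model's constructs have done
# their job, whether or not they "exist" in reality.
```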

Whether or not the constructs used in our model actually exist in reality (i.e. the question of realism) is thus irrelevant. What is relevant is our connection to reality, and that connection is the observations we can make. Thus, the goal is not to predict actual reality, but to predict what we will observe, using our model’s constructs.

Here’s a picture describing how this looks:

[Figure: predicting future observations – whether or not our model exists in reality is not important; what is important is our ability to predict future observations.]


Iterative Problem Solving

When it comes to solving problems, there are two main lines of thought:

  1. Plan, plan, do
  2. Do, learn (repeat)

I’ve found through life experience that the second method is almost always the better one: it arrives at a solution quicker, and it solves the problem better.

In this post, I’ll explore why that is – how iterative problem solving actually works.

Solving math problems iteratively

Doing homework one day as a kid, I discovered that I could solve my math problems more easily by just testing something and seeing what would happen. Even with no idea where it would lead, I simply started jotting down a solution, not knowing whether it would hold or not.

What I discovered was that “just trying something” would yield a solution far quicker than thinking ahead.

I proudly announced my discovery to my teacher. I don’t know if she really understood what I was talking about, but she applauded me nevertheless, encouraging me to keep doing what I did.

This approach to problem solving has stuck with me ever since. Now I’m at a point where I want to explore the mechanisms behind this type of problem solving, to understand when and where it can be applied. In order to do that, we have to look at the mechanisms that make it work.

How iterative problem solving works

In the situations where iterative learning works, what happens is as follows:

You have no clue what the solution is, but you do have some (far from correct) ideas or guesses or assumptions.

So based on these ideas/guesses/assumptions, you test a quick and dirty solution. What you arrive at is probably very wrong, but you will have gained something immensely valuable: Learning.

By testing your ideas, guesses and assumptions very quickly, you will see the actual results they yield. This is feedback, which gives you learning.

Thus, during the process of trying and learning, the additional learning you gain will probably be far more than what you would have concluded if you had tried to figure out the “correct” solution without getting your hands dirty and actually trying immediately.

Using those new learnings, you revise your guesses, assumptions and ideas, and try again – this time from a higher level of understanding.

By repeating this process, you will continuously increase your learning until you are at a point where your assumptions, guesses and ideas are correct enough to bring you to the solution.

A formalized iterative learning process

Actually, learning IS making an assumption (read: guess) based on what you do know, then testing that assumption to see if it holds.

So in your original “try and learn” approach, you might have tried to solve the problem by assuming (read: guessing) three things: Assumption 1, Assumption 2 and Assumption 3 (A1, A2 and A3).

If the assumptions produce the correct answer, great! You have verified that all three assumptions are correct.

If you get the wrong result, at least one of the above assumptions must be wrong. This, in itself, is valuable knowledge, because it presents you with two choices:

  1. If you have other ideas (for example A4 and A5) which you think are likely to produce the correct answer, simply try to solve the problem again using those.
  2. If you don’t have any more ideas, or if you have too many possible ideas to test, then you may want to drill down into A1–A3 to draw additional learnings about why they failed.

Number 1 is easy: Simply repeat the process.

Number 2 will create a “recursive iterative learning” cycle.

Recursive iterative learning

Pick one of your original assumptions to drill deeper into, for example A1.

Formulate sub-assumptions that underlie A1. For example, you might have some ideas (assumptions) about why A1 can’t be correct: Let’s call these A1.1, A1.2 and A1.3.

Pick one of these sub-assumptions, preferably one that would lead to “chain reactions” in terms of your original solution (i.e. if it is correct, it would also eliminate or strengthen some of your other assumptions). Then test it.

If it succeeds, great: You have learned something new. This new learning will have consequences for at least A1 (striking it from your list of possibly correct assumptions), and possibly more.

If it fails, repeat the process by testing the other assumptions at this level (A1.2, A1.3 and so on), or create and test sub-sub-assumptions (for example A1.1.1, A1.1.2 and so on). Do this until you can draw a definitive learning, then walk back up your recursive learning chain and let all the recursive learnings fall into place.

You have now drawn a set of learnings from your original guess. From this new set of learnings, you can make new assumptions that are closer to the truth, test them, and repeat the process. With each iteration you come closer to the truth, until you finally arrive at it.
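
For the programmatically minded, here is a rough Python sketch of this recursive process. The function names and the idea of encoding assumptions as testable objects are my own illustration – in real life the “tests” are quick-and-dirty attempts, not code:

```python
# A rough sketch of recursive iterative learning (all names invented).
# `test` runs a quick-and-dirty experiment for one assumption and returns
# True if it held; `sub_assumptions` proposes A1.1, A1.2, ... explaining
# why a failed assumption might have failed (empty list if no ideas).

def learn(assumptions, test, sub_assumptions):
    """Test each assumption; on failure, recurse into why it failed."""
    learnings = []
    for a in assumptions:
        if test(a):
            learnings.append((a, "held"))    # verified: keep building on it
            continue
        learnings.append((a, "failed"))      # the failure itself is feedback
        # Recursive step: drill into sub-assumptions about why `a` failed,
        # until a definitive learning emerges at some depth.
        learnings += learn(sub_assumptions(a), test, sub_assumptions)
    return learnings  # raw material for the next, better-informed iteration

# Usage sketch: revise assumptions from the learnings and repeat, e.g.
# while not solved: assumptions = revise(learn(assumptions, test, sub_assumptions))
```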

What it looks like in real life

In reality, nobody (I hope..?) thinks like the above. Instead, the process happens unconsciously when we just “try something”.

For example, let’s take the math problem I was trying to solve as a kid described above. Here’s what actually happened:

I was sitting and looking at the problem, with no clue as to how I was supposed to solve it.

So instead of sitting there, stuck in my own thoughts, I decided to simply jot something down. I started by writing one character, and then the next. Before jotting down each character, I had no idea which character it would be; the actual character came to me as I started jotting.

A couple of times, I realized that the character or formula I jotted down didn’t make sense (=> my first iterative learning, happening organically). So I erased, and tried again (using the learning from the previous step to try something new, i.e. realizing that A1 didn’t work and trying A2 instead).

At some point, I thought I had arrived at the correct solution (using A2). But everything I had done turned out to be garbage, because the result wasn’t what I wanted. I started wondering why the heck it didn’t work. I had an idea (A1.1). So I started experimenting on the side, trying to answer the question in my mind of why my original solution didn’t work (testing A1.1). Suddenly I got an interesting result (A1.1 proven), which gave me a new idea (A3), which I used when starting the original question from scratch (testing A3), and which arrived at the correct conclusion.

In reality, the process is even messier than this. But the actual process is the same, only more complicated (more branches, more assumptions and sub-assumptions), not different.

A couple of scenarios in which you can apply iterative learning

So where can you actually apply “iterative learning”? Well, as it turns out, in a lot of places:

Programming: Trying a solution and seeing where it leads you, drawing learnings from that destination and trying again (Agile Software Development)

Starting companies:  Start from where you are, using the knowledge you have, make a quick and dirty roadmap, start the journey, and learn and adjust as you go (Lean Startup).

Building rockets: Build a rocket as quickly as you can, using what you know (A1, A2 etc.). When the rocket crashes, analyze why it crashed, draw a new conclusion (A1.1), make a new assumption (A3) and build another one. (Elon Musk’s methodology as described in this biography)

And probably much more 🙂

Summary of iterative learning

So in summary, when you have a problem, even though you know that you don’t know the answer, simply assume things and get started. Then learn from the results you get, and start again with the higher level of knowledge you have. And so on, until you have ruled out all but the correct solution.


Why Bitcoin will go down to near zero

Why would anyone pay their USD for a BTC? Because:

  • they believe the value of BTC will increase in the future
  • arbitrage (the current value of BTC is such that by selling something for BTC and immediately converting the proceeds to USD, a seller makes more money than by selling it for USD directly)
  • there is a certain amount of production, and a certain number of BTC, in the market

And why would a merchant sell something for BTC instead of USD? Because he/she believes that the BTC received will be exchangeable for other goods, or for USD, worth the same amount as the USD he/she could have charged instead. So again, based on belief.

There is no “intrinsic” value in BTC, except for the fact that people think it is valuable. Even if there were 1,000 BTC out there and nobody thought they were worth anything, they wouldn’t be worth anything. So BTC is entirely speculation.

But let’s say nobody believes BTC is actually worth anything. Then one merchant decides that he believes a BTC is worth a car, and announces that he will sell cars for 1 BTC. Suddenly, people who want a car will exchange their USD for BTC very cheaply, and buy that car. But if many people want to buy that car for 1 BTC (which they will – because they can resell it for more USD), then many people will start competing for BTC, driving the price of BTC against USD up to the point where 1 BTC becomes the equivalent of the USD price of that car. This means the merchant will have driven the price of BTC up to the USD price of that car, for as long as he can keep selling cars at 1 BTC each.
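
To put toy numbers on that anchoring argument (every figure below is invented for illustration):

```python
# Toy numbers for the car-dealer thought experiment (all values invented).
usd_price_of_car = 30_000   # what the same car costs in USD elsewhere
dealer_price_btc = 1        # the dealer's arbitrary announcement: 1 BTC per car

# While BTC trades below the car's USD price, there is free money:
btc_market_price = 5_000
profit_per_car = usd_price_of_car - dealer_price_btc * btc_market_price
print(profit_per_car)  # 25000 -> buyers rush in, bidding BTC up

# Competition erases the arbitrage only when:
implied_btc_price = usd_price_of_car / dealer_price_btc
print(implied_btc_price)  # 30000.0 -> 1 BTC ~ the USD price of the car
```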

At the same time, there will be “fanatics” who refuse to pay USD – who will only buy a car if they can pay with BTC. Other car dealers will lose these customers if they don’t offer cars for BTC. Naturally, they will offer a similar exchange value to the first car dealer’s – because otherwise they will either lose money or lose customers. So the first car dealer will have set the BTC price, at an arbitrary value that he/she picked to start with.

What will happen with Bitcoin’s price if this magical car dealer disappears?

Now, let’s examine what happens with the price if this car dealer disappears from the market (i.e. can no longer sell an unlimited number of cars at 1 BTC per car).

In reality, what this means is that there is no controlling authority determining the price any more. Our original car dealer could control the price because he could mysteriously produce an unlimited number of cars, never going bankrupt. Because of this, he could determine whichever price he wanted, and he arbitrarily set 1 BTC per car. Because his ability to keep producing cars was unlimited and not dependent on what other people would pay him in USD for his BTC, he didn’t need to follow any price indications. But other people could not set whichever price they wanted – they had to follow his price example, otherwise they would either lose customers or USD. Thus, he became the price setting authority.

Now that he’s gone, who will set the price and how will the price fluctuate?

The simple answer is – nobody and everybody. What will happen is that everybody will look at everybody else, trying to second-guess how many USD others will be willing to pay for a BTC. There will be no “intrinsic value” unless someone, for some reason, offers something valuable in return for BTC. And the only reasons someone would do that are 1) speculation, or 2) an authority which doesn’t have to produce anything forcing you to pay it in BTC. There is nothing in BTC which is inherently valuable – unless you count the “fun factor” of paying with it – but how much is that actually worth, in USD?

Which is exactly why fiat currency is valuable: inherent demand for it exists, because it is the currency the government wants to be paid in.

There is, however, a “roof” on how much value BTC can have. And that roof is determined by the amount of BTC out there in the market. If there are not enough BTC in circulation to pay for a product, then prices can’t rise any more – people simply won’t pay those prices.
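
One back-of-the-envelope way to see the roof is a simple quantity-of-money relation. This is my own simplification of the argument, and every number below is invented:

```python
# Back-of-the-envelope cap on BTC's exchange rate (all numbers invented).
# If a fixed coin supply has to carry all BTC-denominated commerce, then
# roughly: supply * velocity * price_per_btc = annual transaction volume.

btc_supply = 21_000_000            # eventual maximum number of coins
velocity = 10                      # times each coin changes hands per year (guess)
annual_volume_usd = 1_000_000_000  # USD worth of goods actually bought with BTC (guess)

price_per_btc = annual_volume_usd / (btc_supply * velocity)
print(f"~${price_per_btc:.2f} per BTC")  # ~$4.76 under these made-up numbers
# More real transactional demand raises the number; belief alone does not.
```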

So it seems that BTC is ultimately doomed to fail, or to have very little value. How little? It depends on how much you would be willing to pay for the fun factor of being able to pay with it. Because that “fun factor” is going to determine how many people want to have BTC in their pockets so that they can pay their friends with it – and the number of people who want it, and how much they want it, will determine how much they are ready to pay for it.

Gold? Gold is solid. It feels heavy. It shines. Its “fun factor” is significantly higher than that of BTC. It also has a solid history, and its value is ingrained in people’s minds. It is told about in countless stories, each one reinforcing its value for us.

BTC could in principle achieve the same thing. But its value came from the fact that criminals used it to perform criminal activities. There is no group of people out there who would use BTC to do something that money can’t do. Exchange money over long distances? Money will be able to do that. Owning BTC will not be needed – just the transfer protocol.

Miners and the infrastructure? Yes, they have a lot of BTC. But it’s not the amount of BTC they have that counts – it’s what others are willing to pay them to get their BTC. Miners can’t do squat about the fact that soon nobody will want their BTC. They can mine and mine, and collect more and more BTC, and beg people to take their BTC and give them valuable stuff (USD or products) in return. But no amount of power and infrastructure can force people to take their worthless BTC if nobody wants to have it.

Can miners offer services and take BTC as payment for those services? Short answer – no. Because unless they can create those services for free, they will go bankrupt doing so, since they won’t be able to purchase anything with BTC that nobody wants. They will just collect even more BTC, at a faster rate than before, and ultimately go bust quicker.

With a roof on the upside, and no limit on the downside (except complete wipe-out), I am afraid BTC is doomed to go down in history as one of those experiments that taught us that, regardless of how many conspiracy theories and ideas we might have about the “evilness” of the state, the way the system works today does so for a reason. The exception would be if some authority (an individual, an organization or a group of people) wanted to gather BTC so much that they would pay for it – which is exactly how all free markets work. Unfortunately, I can see no reason why anyone would.

(P.S. Note that this doesn’t mean that BTC will disappear and die. It only means that BTC will have a really low exchange rate. Basically, all the BTC in the world will be worth a certain amount of dollars, equivalent to the market’s need to perform Bitcoin’s core value proposition – which is to record ownership in an easy way, or to make purchases over the net cheaper, gaining the extra margin by being able to offer prices slightly lower than the credit-card fiat currency price. But that doesn’t mean BTC will be valuable – because the merchant will likely want to convert BTC to USD immediately and keep the money as USD. You don’t need many BTC to perform these operations. Only a couple of million satoshi.)


Life decision making process with acceptable worst case and unlimited upside

Barbell strategy decision making heuristic

When making an important, potentially life-changing choice where some of the options may be irreversible, it is important to thoroughly analyze the different options before choosing one. But it is impossible to weigh options against each other without objective criteria against which they can be measured.

I will propose here that, because of the uncertainty in predicting the future, and the existence of so-called positive and negative “black swans” (unexpected, impossible-to-predict, high-consequence events), the best strategy is the “barbell” strategy: cap your worst case at something acceptable, and leave the upside unlimited.



A more fluid way to model reality into data models

How the ambiguous nature of reality makes it difficult to model in a database application

I was reading the book “Data and Reality” (highly recommended), about how to model reality in a database application. In its first chapter, the book discusses the difficulty of doing so, given the inherent ambiguity of reality. However, you don’t want your database to be ambiguous – it has to be structured in a way that lets you efficiently categorize, separate, search, perform operations, etc.

Or does it?

How reality is ambiguous (from a data modelling perspective)

Reality is ultimately not a set of categories, states, etc. Humans model it that way, using computers, language and other constructs, in order to interpret it in the most efficient way for our purposes.

At a very low level (low enough for most of our purposes, if we are talking about practical everyday or business applications), the best model to approximate reality is simply physics. From the basic laws of physics we can derive everything else (staying above the quantum level for now). We could, in theory, model reality as a set of atoms and molecules with physical properties that interact with each other. This makes reality incredibly fluid.

Any time you simplify that fluid reality into a database model, you lose that ambiguity – and with it, the true mapping of your database to reality (the more you simplify, the rougher your approximation of reality becomes, and that means real problems in your business applications).

As a data modeler, you trade flexibility for CPU

In reality, a collection of molecules which together create a metal rod is just that – a collection of molecules organized in a specific fashion, with endless physical properties arising from that assembly. But your database, depending on what it is designed to do, can categorize that reality as a metal rod, a pipe, a baton or something else. And it may look at different properties of the thing depending on what the application is designed to do: in some cases the weight will be important, in others the ability to conduct electrical current. This is the fundamental problem of modelling reality in a database. Which aspects do you model? Which aspects do you ignore? How do you categorize the thing you are modelling?

So, as a data modeler, you are making choices. These choices narrow that incredibly flexible “collection of atoms” into a much smaller field of use. Modelling it in a certain way necessarily limits what you can see the thing as, and that reduces your application’s flexibility. If you call it “a collection of atoms and molecules of type X grouped in way Y”, you have incredible flexibility: only your imagination and the thing’s physical properties limit what you can use it as. If you categorize it as a rod, you disregard all those physical properties and see it as a rod, and a rod only.

By categorizing the thing upfront (based on a database design which assumes certain things), you are in fact making a trade-off: you give up flexibility for clarity and speed. The less defined your categorization is, the more processing power and intelligence you need to find, group, understand and make decisions about the things you are modelling. By creating a category, you make a decision upfront that makes your processing power more scalable (you have made one decision that you can cheaply query against many times in the future – if you created the category “rod” and put objects into it, finding all rods in your database is trivial) but also more rigid (simply because you HAVE made a decision upfront, and decisions are often irreversible).

Whereas if you don’t categorize, but instead store objects by their properties alone (a lower-level description of a rod might be “thing made of metal with length x and width y”), you have more flexibility in the future (you can define those things as rods or as something else), but the approach is less scalable. Each time you want to find all things in the system that can be used as a rod, you must first define which properties “a rod”, per your definition, has, and then compare each object in your database to that definition. A sketch of the two approaches follows below.
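
Here is a minimal Python sketch of the trade-off. The schemas, the is_rod predicate and all values are invented for illustration:

```python
# (a) Categorize upfront: cheap queries, rigid forever.
rods = [{"id": 1, "length_m": 2.0, "width_m": 0.02}]
all_rods_a = rods  # the decision "this is a rod" was made at insert time

# (b) Store raw properties: flexible, but every query re-derives the category.
objects = [
    {"id": 1, "material": "steel", "length_m": 2.0, "width_m": 0.02},
    {"id": 2, "material": "steel", "length_m": 0.3, "width_m": 0.30},
]

def is_rod(o):
    # "Rod" is defined at query time, and the definition can change tomorrow
    return o["material"] == "steel" and o["length_m"] / o["width_m"] > 10

all_rods_b = [o for o in objects if is_rod(o)]  # a full scan, paid on every query
print(all_rods_b)  # only object 1 qualifies under this definition
```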

Can we model and search based on sensory information rather than properties or categories?

We can follow this logic further. If we go to an even lower level, you may not need to provide any properties yourself. Perhaps the properties can be derived from more basic information, such as sensory information. Perhaps you can feed your database images of rods and other sensory data: temperatures of rods, roughness of rods, sounds rods make when they hit something, and so on. From that sensory information, it should be possible to derive any of the properties you might ever require.

Perhaps we can take the use case itself to a lower level too. Perhaps you won’t search for rods by providing search properties, but simply by providing another piece of sensory information (an image, a feel, etc.). Thus, without any categorization visible in your database, you could let the system find all the rods similar to the rod you scanned in, and then improve the search by giving the system feedback on the rods it presented (see the sketch below).

Of course, this doesn’t mean that the items in the system aren’t categorized. They may be – but the categorization is internal to the system, not externally visible to you.
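
As a toy illustration of “search by sensory information”, here is a nearest-neighbour sketch in Python. The feature vectors and the distance metric are invented stand-ins for real sensor data:

```python
import math

# Stored objects, each represented only by raw "sensory" readings
# (invented dimensions: shininess, roughness, elongation).
stored = {
    "object_a": [0.90, 0.10, 0.70],
    "object_b": [0.20, 0.80, 0.10],
    "object_c": [0.89, 0.11, 0.71],
}

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

scanned = [0.88, 0.12, 0.72]  # sensory reading of the rod in your hand
ranked = sorted(stored, key=lambda name: distance(stored[name], scanned))
print(ranked)  # ['object_c', 'object_a', 'object_b'] - no explicit "rod" category
```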

But doesn’t that mean we haven’t actually solved the problem of how to categorize reality better? What we have done is move the responsibility for categorizing from ourselves to the system itself. The categorization, simplification and “upfront decision making” about how to model reality is still going on behind the scenes, and we haven’t really gained any flexibility at all. (I will go as far as to say that such a system cannot possibly be flexible in the way we want – and the way we want it is to model reality without making any upfront decisions about which category something belongs to. Such decisions are the opposite of creativity, since creativity can be described as the ability to not pre-categorize objects, and thereby to find new ways to use existing tools.)

What we are searching for is a way to save and retrieve information quickly without sacrificing flexibility. How can we do that? It seems that as soon as we start categorizing things, we gain speed but lose flexibility.

What if we copy and simulate reality instead of “representing” it?

Well, actually there is another way: a different way of storing information which doesn’t even use the notion of categorization. We can model the thing we are saving physically and directly, by simulating it in another system. Basically, we copy the physical world into another physical object, where the “other” physical object has some properties the first one lacks, namely 1) the ability to change and adapt quickly, and 2) the ability to iterate quickly (which actually follows from the first point). Bear with me; I will describe what I mean below.

The “simulator” gets sensory input about the “real” object, and creates a replica of it inside itself. There are no categories, properties, data points. Just a simulation. This “simulator” can store many such replicas inside itself, each corresponding to real sensory information that it gets from its “reality sensors”.

It can also invent imaginary simulations, because it has the ability to copy, manipulate and play with its own simulations inside itself. Basically, it is a simplified, more dynamic simulation of all the sensory input it has ever received from the real world, plus its own ability to manipulate those simulations (which corresponds to human imagination). When manipulating (or “playing with”) its simulated physical world, our “reality simulator” may see recurring patterns. It may then save a recurring pattern as a separate “pattern object”. In the future, when saving new objects that are similar to the pattern object, it can simply save how those objects differ from the pattern object, thereby saving space and CPU (because it can save how the pattern object usually interacts with other objects, and derive from that how the actual object it is simulating should behave – including its differences from the pattern object).

So we are going from a “representing” paradigm to a “copying and simulating” paradigm. Depending on the physical implementation of this simulator, it may have both the efficiency of the “modelling” paradigm and the flexibility of the real world.

Interestingly, such a “reality simulator” already exists. Evolution has chosen it as the best way to model reality. Just about every decision-making organism on earth (including humans) uses it every day to copy and store a version of reality, based on sensory input, inside its nervous system. Humans are the most advanced species doing this, using our brains. We use this reality simulator to keep a copy of reality inside our heads, to predict the future by running simulations on that copy, and to query and retrieve information about (report on) reality in order to make business or life decisions.

How would a reality simulator be implemented?

A reality simulator (our brain included) doesn’t actually create a literal replica of physical objects. Instead, physical objects are mapped to sensory information, that sensory information represents the actual physical objects, and it is what gets stored. This is equivalent to simulating the physical object, since reality is mapped to specific sensory input.

This has some implications for our theory. It means that instead of creating a rigid model of reality consisting of set categories, we present reality to our computer as a set of sensory signals. Instead of representing “object with properties x and y”, we present “vision sensory signal a, sound sensory signal b, touch sensory signal c” and so on. This combination of sensory signals can then represent anything that exists in reality, and the computer can replicate, duplicate, iterate on, combine, etc., those representations in its internal world. In that way, the computer can simulate all of reality in its internal systems.

For example, when the computer senses a rod, it stores the rod (the sensory signals that represent the rod) inside its simulated reality. After sensing multiple rods a number of times, the computer may start to recognize that these objects are very similar to each other and seem to be used for the same purposes. So it may create a “generic rod” pattern in order to save CPU and space, and predict reality based on how a generic rod usually interacts with other objects. A sketch of this follows below.
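
A minimal Python sketch of the “pattern object” idea – store a prototype once, then store each concrete object as a delta against it. All names and fields are invented:

```python
# Prototype learned from many similar observations (a "pattern object").
generic_rod = {
    "material": "steel",
    "length_m": 1.0,
    "width_m": 0.02,
    "conducts_electricity": True,
}

# Concrete objects are stored only as their differences from the prototype.
rod_42 = {"length_m": 2.5}           # otherwise "like a generic rod"
rod_43 = {"material": "aluminium"}   # same shape, different material

def materialize(prototype, delta):
    """Reconstruct the full simulated object: prototype overridden by delta."""
    return {**prototype, **delta}

print(materialize(generic_rod, rod_42))
# {'material': 'steel', 'length_m': 2.5, 'width_m': 0.02, 'conducts_electricity': True}
```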

How to take it to computers

As long as the human-created digital computer world does not move from a “representing” paradigm to a “simulating” paradigm, we will not have strong AI, regardless of how powerful CPUs get. The representing paradigm has inherent limitations, sketched above, that cannot be solved because of the trade-offs discussed above (and probably for other reasons I haven’t discussed).

However, if we embrace the simulation paradigm, we can create the same strong AI found in our brains, while improving on the inherent weaknesses of a living organism. As organic implementations, our brains consist of living tissue that deteriorates fast and has limitations in scale, power and size. By implementing the same simulation paradigm in a non-organic physical system, we can overcome these weaknesses and create a brain unconstrained by them.


How US decision to torture one individual led to the rise of ISIS

I would like to wish you a Merry Christmas, but I will instead take the opportunity to remind people of what happens when a few people in government (or just government in general) get too much power. I will illustrate this by summarizing how a few people in the US made decisions roughly 10 years ago that led to the rise of ISIS today (main article: http://en.wikipedia.org/wiki/Abu_Zubaydah):

  • The US captures Abu Zubaydah, an important link in “the war against terrorism”
  • The FBI gets crucial information out of him using non-torture methods. All good so far.
  • Then Bush, Dick Cheney and Condoleezza Rice panic and give direct orders to the CIA that “harsh methods” (torture) should be used
  • Under torture and in desperation, Abu Zubaydah provides any piece of information he believes may stop the torture, regardless of whether it is true (nothing useful came out after the harsh methods started)
  • One of the pieces of (false) information from him leads Bush and his administration (likely in panic and echo-chamber mode) to pursue an unfounded war in Iraq
  • Iraq’s fall leads to (aside from unprecedented waste) instability from the changed power structure in the Middle East, which is today a major reason for the rise of ISIS

What can we learn from this?

  1. Do not trust that government will make the best decisions – they are ultimately just a group of people, just like you and me
  2. The ends don’t justify the means – the individual is always the most valuable, not the collective, and the sum of multiple individuals isn’t more valuable than any single individual (this doesn’t make mathematical sense, but as soon as you violate it, you get terror)

Stylus-based Android note app that syncs with Google Drive

What I wanted

A stylus-based note-taking app that takes notes as well as S Note does, AND saves to / opens from Drive (syncs with Drive), allowing editing of Drive files from within the app or from within Drive itself.

Apps tested

S Note

I can sync the files manually into Drive, but I won’t be able to retrieve them in a useful way later, because they are stored only as thumbnails.

LectureNotes

Stylus features:

  • Very good (pressure sensitive, different styles, etc.)
  • Can cut and move pieces

Export/import:

  • Can export single pages as .png
  • Can export multiple pages as a .zip containing a bunch of .png files. You can import them from within the app, but you can’t go into Google Drive and open the file with the app. If you import a file, edit it, then upload again, it creates another copy with the same name; it can’t edit and save directly. In addition, importing seems to work only into an existing notebook, not by creating a new one.
  • Can export multiple pages as PDF to Drive, but can’t edit those PDF files.

Summary benefits:

  • Saves in PNG
  • Very good stylus functions

Summary negatives:

  • Cannot edit files with multiple pages, because 1) it can’t edit PDF files, and 2) multiple-page files are saved as a .zip containing multiple .png files, which makes it impossible to edit them in Drive (you can of course edit them in the note app by downloading from within the app, editing, then uploading – but that is a very clumsy workflow)

To solve this, I installed the app FolderSync, which syncs a LectureNotes folder with a Google Drive folder. I’ve tested syncing in both directions and it works well.

Solution

Use LectureNotes and FolderSync together; this fulfills all the criteria.

(Note: I could not do this with the app S Note since it saves the files in a custom format.)

(Note 2: This method may be possible with other note-taking apps than LectureNotes, but LectureNotes was good enough, so I did not test others. Another contender may be Papyrus – but I have not checked which file format it saves its notes in. If it is PNG or another common format, the method should work just as well with it, or with any other note-taking app.)
