I wonder how the brain does it. I mean, human learning has similar issues, right? But we can still occasionally generalize. Maybe it's that we're not good enough at statistical learning to predict our inputs satisfactorily, and that gives whatever system we have for generalized abstraction learning a chance to act? If so, we may need to train networks of restricted size on toy problems first, and only scale them up once they've grasped basic logic.